
Comments (7)

JalonSolov commented on July 29, 2024

Not quite... for example, as shown above, c2v is generating inline comments, which V no longer supports.
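For instance (a minimal, illustrative sketch; the variable names are made up), vfmt now rejects the C-style inline comments that c2v still emits:

```v
fn main() {
	x := 1 /* inline comment */ // error: inline comment is deprecated, please use line comment
	y := 2 // a line comment like this one is fine
	println(x + y)
}
```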

from v.

medvednikov commented on July 29, 2024

c2v is tested via CI; DOOM is translated and run on each commit (and verified pixel by pixel),

so it's always updated to the latest V standards

JalonSolov commented on July 29, 2024

Have you tried doing v translate llama.cpp?

ouvaa commented on July 29, 2024

@JalonSolov

v translate llama.cpp

C2V is not installed. Cloning C2V to /root/.vmodules/c2v ...
Compiling c2v ...
C to V translator 0.4.0
"/usr/local/src/llama.cpp" is a directory, processing all C files in it recursively...

  translating ./tests/test-c.c ... c2v_output/test-c.v:334:6: error: cannot register fn `Ggml_opt_callback`, another type with this name exists
  332 | }
  333 | 
  334 | type Ggml_opt_callback = fn (voidptr, int, &f32, &bool)
      |      ~~~~~~~~~~~~~~~~~
  335 | type Ggml_log_callback = fn (Ggml_log_level, &i8, voidptr)
  336 | struct Ggml_opt_params {

Internal vfmt error while formatting file: /usr/local/src/llama.cpp/./c2v_output/test-c.v.
 took   618 ms ; output .v file: ./c2v_output/test-c.v
  translating ./ggml-alloc.c  ... c2v_output/ggml-alloc.v:97:2: error: struct embedding must be declared at the beginning of the struct body
   95 | struct Ggml_backend_i { 
   96 |     get_name fn (Ggml_backend_t) &i8
   97 |     c.free fn (Ggml_backend_t)
      |     ~~~~~~
   98 |     get_default_buffer_type fn (Ggml_backend_t) Ggml_backend_buffer_type_t
   99 |     set_tensor_async fn (Ggml_backend_t, &Ggml_tensor, voidptr, usize, usize)

Internal vfmt error while formatting file: /usr/local/src/llama.cpp/./c2v_output/ggml-alloc.v.
 took   364 ms ; output .v file: ./c2v_output/ggml-alloc.v
  translating ./ggml.c        ... c2v_output/ggml.v:23172:3: error: inline comment is deprecated, please use line comment
23170 | n_gradient_accumulation: 1, 
23171 | adam: (unnamed at ./ggml.h {
23172 |         /*FAILED TO FIND STRUCT "(unnamed at ./ggml.h"*/10000, 1, 0, 2, 0.00100000005, 0.899999976, 0.999000012, 9.99999993E-9, 9.99999974E-6, 0.00100000005, 0}
      |         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
23173 |         , 
23174 | }

Internal vfmt error while formatting file: /usr/local/src/llama.cpp/./c2v_output/ggml.v.
 took  6821 ms ; output .v file: ./c2v_output/ggml.v
  translating ./ggml-backend.c ... c2v_output/ggml-backend.v:915:2: error: inline comment is deprecated, please use line comment
  913 | fn ggml_backend_cpu_buffer_type() Ggml_backend_buffer_type_t {
  914 |     ggml_backend_cpu_buffer_type := Ggml_backend_buffer_type {
  915 |     /*FAILED TO FIND STRUCT "Ggml_backend_buffer_type"*/Ggml_backend_buffer_type_i {
      |     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  916 |     get_name: ggml_backend_cpu_buffer_type_get_name, 
  917 | alloc_buffer: ggml_backend_cpu_buffer_type_alloc_buffer,

Internal vfmt error while formatting file: /usr/local/src/llama.cpp/./c2v_output/ggml-backend.v.
 took   468 ms ; output .v file: ./c2v_output/ggml-backend.v
  translating ./ggml-quants.c ... c2v_output/ggml-quants.v:240:3: error: use `?` instead of `?void`
  238 | fn quantize_row_q4_0_reference(x &Float, y &Block_q4_0, k int)  {
  239 |     qk := 32
  240 |     (void(sizeof(if (k % qk == 0){ 1 } else {0})) , )
      |      ~~~~
  241 |     nb := k / qk
  242 |     for i := 0 ; i < nb ; i ++ {

Internal vfmt error while formatting file: /usr/local/src/llama.cpp/./c2v_output/ggml-quants.v.
 took  1905 ms ; output .v file: ./c2v_output/ggml-quants.v
  translating ./ggml-mpi.c    ... ./ggml-mpi.c:5:10: fatal error: 'mpi.h' file not found
#include <mpi.h>
         ^~~~~~~
1 error generated.

The file ./ggml-mpi.c could not be parsed as a C source file.
C2V command: '/root/.vmodules/c2v/c2v' 'llama.cpp'
C2V failed to translate the C files. Please report it via GitHub.

JalonSolov commented on July 29, 2024

The last one is easy enough... you have to tell c2v (the program called by v translate) where the mpi.h header can be found. See https://github.com/vlang/c2v?tab=readme-ov-file#configuration for details on how to do that.
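If editing the c2v configuration isn't convenient, another option (an assumption: a Debian/Ubuntu environment like the root container in the log above) is to install the MPI development package so `mpi.h` lands on a default include path:

```shell
# Assumption: Debian/Ubuntu; the package name differs on other distros.
apt-get update && apt-get install -y libopenmpi-dev

# Confirm the header is now somewhere the parser can find it:
find /usr -name mpi.h
```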

The rest of those messages, though, are bugs in c2v. Looks like it hasn't been updated to the latest V standards. :-\

hholst80 commented on July 29, 2024

@ouvaa is there a version without MPI? MPI is a specialized, high-performance message-passing library. It would be nice to get llama2.c working first?

https://github.com/karpathy/llama2.c

trufae commented on July 29, 2024

llama.cpp is C++, not C, so don't expect c2v to work. Also, llama.cpp is moving really fast, with lots of changes every day and support for a lot of hardware. I don't think it makes sense to port it to V. But maybe just wrap the public C API from V, like it's done in the llama-cpp-python bindings.

For fun, porting GPT-2, which is already in C, to V should be easy. But it won't compete with the performance or features offered by llama.cpp, and supporting GGUF is a key feature for any inference engine nowadays.
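The wrapping approach can be sketched in V roughly like this (hypothetical: the flag values and install paths are assumptions, and `llama_print_system_info` is just one small function from llama.cpp's public C header):

```v
// Sketch: calling llama.cpp's C API from V instead of translating it.
// Assumes libllama and llama.h are installed at the paths below.
#flag -I/usr/local/include
#flag -L/usr/local/lib -lllama
#include "llama.h"

// Declare the C function so V can call it.
fn C.llama_print_system_info() &char

fn main() {
	info := unsafe { cstring_to_vstring(C.llama_print_system_info()) }
	println(info)
}
```

The same pattern (declare `fn C.foo(...)`, then call `C.foo`) extends to the model-loading and inference functions.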
