mlmonkey a day ago

I'm curious: has there been any work done on generating embedding vectors instead of discrete tokens via diffusion? What would that look like? Please point me to some references. Thanks!

yugretcx a day ago

Why do these text diffusion demos always look like the number of allowed tokens is fixed for a specific unfilled region?

Is this the case?

I.e., if the region only has four tokens (here, characters) but the model calculates that the best word is “forget”, does it just abandon the best fit, or truncate it to fit?

Are there text diffusion models with lax infill directives?

  • rand0mwalk a day ago

    Each token starts as a special [MASK] token. Then, as the diffusion process runs, they are "unmasked", i.e. sampled.

    So yes, you define a sequence of [MASK] tokens with some length ahead of time.

    In practice, if a model wants to write a shorter sequence, it'll just fill the remaining tokens with empty content. If it wants to write a longer sequence, you'll have to detect this and extend the sequence with more [MASK] tokens. That situation is typically easy to spot, since no "end of sequence" token will be present when the model wants to generate more.
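
    A rough sketch of that loop, assuming a toy MASK_ID/EOS_ID and a stand-in model that just returns random logits (not this repo's actual code):

        import torch

        VOCAB, MASK_ID, EOS_ID = 1000, 0, 1
        model = lambda t: torch.randn(t.shape[0], t.shape[1], VOCAB)  # stand-in for the real network

        tokens = torch.full((1, 32), MASK_ID)            # fixed-length canvas of [MASK] tokens
        for _ in range(8):                               # a few generation rounds
            pred = model(tokens).argmax(dim=-1)          # predict every position
            masked = tokens == MASK_ID
            tokens = torch.where(masked, pred, tokens)   # unmask the remaining slots (naive: all at once)
            if (tokens == EOS_ID).any():                 # an end-of-sequence token appeared, we're done
                break
            # no end-of-sequence token yet: extend the canvas with more [MASK] slots
            tokens = torch.cat([tokens, torch.full((1, 16), MASK_ID)], dim=1)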

  • nathan-barry a day ago

    Yes, this is the case. During training, the model gets a sequence of text (e.g., 512 tokens long) with some percentage of the tokens masked out (replaced with a special <MASK> token). It learns how to unmask those tokens to reconstruct the original text.

    In the case you mentioned, if we had 4 <MASK> tokens in a row, all we do during decoding is predict what those 4 tokens should be.

    Generally, this does not seem to be a significant problem, as there are usually multiple ways to express an idea in varying lengths. Confidence-aware parallel decoding also helps: by committing the highest-confidence tokens first, a well-trained model will usually avoid the scenario you mentioned.
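
    As a rough sketch of confidence-aware parallel decoding (with a stand-in model returning random logits and made-up constants, not this project's actual decoder): every masked position is predicted each step, but only the most confident predictions get committed.

        import torch

        VOCAB, MASK_ID, SEQ_LEN, STEPS = 1000, 0, 64, 16
        model = lambda t: torch.randn(t.shape[0], t.shape[1], VOCAB)  # stand-in network

        tokens = torch.full((1, SEQ_LEN), MASK_ID)
        for step in range(STEPS):
            probs = model(tokens).softmax(dim=-1)
            conf, pred = probs.max(dim=-1)                    # best guess + confidence per position
            masked = tokens == MASK_ID
            if not masked.any():
                break
            conf = conf.masked_fill(~masked, float("-inf"))   # only compete among still-masked slots
            k = max(1, int(masked.sum().item()) // (STEPS - step))  # unmask a few per step
            idx = conf.topk(k, dim=-1).indices
            tokens.scatter_(1, idx, pred.gather(1, idx))      # commit only the k most confident tokens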

Majromax a day ago

The basic MLP block in this model uses a ReLU^2 activation function (x <- ReLU(x)^2). That seems to be copied from the nanochat project, and it's not present in nanoGPT. Is there some documentation on the choice of this activation function?
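
For context, a squared-ReLU MLP block typically looks like the sketch below in PyTorch (dimensions and names are illustrative, not copied from this repo):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ReLU2MLP(nn.Module):
        def __init__(self, dim: int = 768, hidden: int = 3072):
            super().__init__()
            self.fc_in = nn.Linear(dim, hidden, bias=False)
            self.fc_out = nn.Linear(hidden, dim, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.fc_in(x)
            x = F.relu(x).square()   # ReLU^2: zero out negatives, square positives
            return self.fc_out(x)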

  • throwaway2027 18 hours ago

    Isn't it because ReLU is cheap and ^2 is squared loss?

    • kouteiheika 10 hours ago

      When it comes to compute cost the choice of activation function makes little difference nowadays (and it can often be fused with whatever operation comes before it, which makes it effectively free).

      The real reason is simple: it was inherited.

      ReLU^2 was used in the nanogpt speedrun[1] because it produced the best empirical results; Andrej then based his nanochat on the speedrun without changing the activation function, and this project was in turn based on nanochat.

      [1] -- https://github.com/KellerJordan/modded-nanogpt

gdiamos 18 hours ago

One year later and there is still no inference engine for diffusion LLMs

Students looking for a project to break into AI - please!

embedding-shape 21 hours ago

Fun project, easy to understand and nice-looking results, everything one could ask for! I played around with it locally, made a few low-hanging optimizations without adding much complexity, and was going to send over a PR. But then I noticed there is no license attached to the project. What are your plans regarding licensing?

  • nathan-barry 21 hours ago

    Hey, I’ll add the MIT license later today!

doppelgunner 6 hours ago

This is impressive. Can it run on mobile?

tell_me_whai 20 hours ago

Looks fun, thanks for sharing. I see you're implementing Game of Life sampling; what's the reasoning behind using this logic?