
large language models, often implemented as autoregressive transformer models.

GPTs and friends

Most variants of LLMs are decoder-only (Radford et al., 2019)

They have “capabilities” to understand natural language.

They exhibit emergent behaviour that resembles intelligence, but are probably not AGI due to the observer-expectancy effect.

One way or another, this is a form of behaviourism through reinforcement learning: the model is “told” what is good or bad, and thus acts accordingly towards users. However, this induces confirmation bias, where one’s own prejudices about the problem get baked into the alignment.

Scalability

Incredibly hard to scale, mainly due to their large memory footprint and the memory allocated per token (the KV cache).
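
As a rough illustration of the token-memory problem, here is a back-of-the-envelope estimate of KV-cache size (the standard 2 × layers × KV heads × head_dim × seq_len × batch × bytes formula; the example config loosely mirrors a 7B Llama-style model and is only illustrative):

```python
def kv_cache_bytes(
    num_layers: int,
    num_kv_heads: int,
    head_dim: int,
    seq_len: int,
    batch_size: int,
    bytes_per_elem: int = 2,  # fp16/bf16
) -> int:
    """Estimate KV-cache memory: 2 tensors (K and V) per layer,
    each of shape (batch, heads, seq_len, head_dim)."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

# e.g. 32 layers, 32 KV heads, head_dim 128, 4096-token context, batch size 8:
print(kv_cache_bytes(32, 32, 128, 4096, 8) / 2**30, "GiB")  # ~16 GiB in fp16
```

This memory grows linearly with both batch size and context length, which is exactly what schemes like paged attention try to manage.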

Optimization

I gave a talk at HackTheNorth 2023 on this topic and the rationale behind building OpenLLM

  • Quantization: reduce the computational and memory costs of running inference by representing the weights and activations with low-precision data types (see the sketch after this list)
  • Continuous batching: implementing PagedAttention with a custom scheduler to manage swapping the KV cache for better resource utilisation
  • Different attention variants, for better kernels and hardware optimisation (think of Flash Attention 3, Radix Attention, Tree Attention, etc.)
  • Byte-Latent Transformer: the idea of using entropy-based patching to choose the next units instead of token-level decoding. 1
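
A minimal sketch of weight-only absmax int8 quantization, just to make the first bullet concrete; this is not the scheme OpenLLM ships, and the shapes here are illustrative:

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Absmax weight-only quantization with a per-output-row scale."""
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4096, 4096)       # a weight matrix in fp32
q, scale = quantize_int8(w)       # 4x smaller to store than fp32
w_hat = dequantize(q, scale)      # approximate weights at inference time
print((w - w_hat).abs().max())    # small quantization error
```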

on how we are being taught.

How would we assess thinking?

Similar to the calculator, it simplifies and increases accessibility for the masses, but in doing so the value in the act of doing math is lost.

We do math to internalise concepts and to practice thinking coherently. Similarly, we write to crystallise our ideas, and in the process improve them through the act of putting them down.

The process of rephrasing and arranging sentences poses a challenge for the writer, and in doing so teaches you how to think coherently. Writing essays is an exercise for students to articulate their thoughts, rather than a test of their understanding of the material.

on ethics

See also Alignment.

There are ethical concerns with the act of “hallucinating” content; alignment research is therefore crucial to ensure that the model does not produce harmful content.

For medical care, the ethical implications require us to develop more interpretable models

as a philosophical tool.

By creating better representations of the world for both humans and machines to understand, we can truly have assistive tools that enhance our understanding of the world around us

AI generated content

Don’t shit where you eat; garbage in, garbage out. The quality of the generated content is highly dependent on the quality of the data the model was trained on, as models are incredibly sensitive to variance and bias in the data.

Bland doublespeak

See also: All the better to see you with

machine-assisted writings

source: creative fiction with GPT-3

Idea: use sparse autoencoders to guide idea generation

Good-enough

This only works if you only need a “good-enough” result, where the value of the output outweighs the process.

However, one should always consider putting in the work rather than settling for good enough. In the process of working through a problem, one learns about the bottlenecks and sub-problems to be solved, gaining invaluable experience that would not be achieved by relying on interaction with the models alone.

Programming

Overall it should be a net positive, but it’s a double-edged sword.

as end-users

Source

I think it’s likely that soon all computer users will have the ability to develop small software tools from scratch, and to describe modifications they’d like made to software they’re already using

as developers

Tools that lower the barrier of entry are always a good thing, but they will often lead to even greater discrepancies in software quality

Increased productivity, but also increased technical debt, as generated code is mostly “bad” code, and we often have to nudge the model with a lot of prompt engineering.


mechanistic interpretability

whirlwind tour, initial exploration, glossary

The subfield of alignment that delves into reverse-engineering neural networks, especially LLMs

To attack the curse of dimensionality, the question remains: how do we hope to understand a function over such a large space, without an exponential amount of time? 2

Topics:

open problems

Sharkey et al. (2025)

  • differentiate between “reverse engineering” and “concept-based” approaches
    • reverse engineer:
      • decomposition hypotheses validation
    • drawbacks with SDL:

inference

Applications in the wild: Goodfire and Transluce

idea: treat SAEs as a logit bias, similar to guided decoding
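
A rough sketch of what that idea could look like; `feature_dir`, `W_U`, and `alpha` are hypothetical placeholders rather than any library’s API:

```python
import torch

def biased_decode_step(logits: torch.Tensor,
                       feature_dir: torch.Tensor,  # SAE decoder direction, shape (d_model,)
                       W_U: torch.Tensor,          # unembedding matrix, shape (d_model, vocab)
                       alpha: float = 2.0) -> torch.Tensor:
    """Project a feature direction through the unembedding to get a per-token
    bias, then add it to the logits before sampling (guided-decoding style)."""
    logit_bias = alpha * (feature_dir @ W_U)  # shape (vocab,)
    return logits + logit_bias

# during generation, pass each step's logits through biased_decode_step(...)
# before softmax/sampling
```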

steering

refers to the process of manually modifying certain activations and hidden states of the neural net to influence its outputs

For example, the following is a toy example of how a decoder-only transformer (e.g. GPT-2) generates text given the prompt “The weather in California is”

flowchart LR
  A[The weather in California is] --> B[H0] --> D[H1] --> E[H2] --> C[... hot]

To steer the model, we modify the $H_2$ layer by amplifying a certain feature with scale 20 (call the result $H_3$) 3

flowchart LR
  A[The weather in California is] --> B[H0] --> D[H1] --> E[H3] --> C[... cold]

One usually uses techniques such as sparse autoencoders to decompose model activations into a set of interpretable features.
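
A minimal sketch of the steering step above using a HuggingFace GPT-2 and a forward hook; `feature_dir`, the layer index, and the scale are illustrative placeholders (in practice the direction would come from an SAE feature or a contrast pair):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

feature_dir = torch.randn(model.config.n_embd)   # placeholder for a learned feature direction
feature_dir = feature_dir / feature_dir.norm()
scale = 20.0                                     # the "amplifier" from the toy example

def steer(module, inputs, output):
    hidden = output[0]                           # (batch, seq, d_model)
    return (hidden + scale * feature_dir,) + output[1:]

# hook the block whose output we called H_2 above (layer index is illustrative)
handle = model.transformer.h[6].register_forward_hook(steer)

ids = tok("The weather in California is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=5)
print(tok.decode(out[0]))
handle.remove()
```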

For feature ablation, we observe that feature activations can be strengthened or weakened to directly influence the model’s outputs

example: Panickssery et al. (2024) uses contrastive activation additions to steer Llama 2

contrastive activation additions

intuition: using a contrast pair for steering vector additions at certain activation layers

Uses the mean difference, which produces a difference vector similar to PCA:

Given a dataset $\mathcal{D}$ of prompts $p$ with positive completions $c_p$ and negative completions $c_n$, we calculate the mean-difference vector $v_\text{MD}$ at layer $L$ as follows:

$$
v_\text{MD} = \frac{1}{\lvert \mathcal{D} \rvert} \sum_{(p, c_p, c_n) \in \mathcal{D}} \big( a_L(p, c_p) - a_L(p, c_n) \big)
$$
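
A sketch of computing $v_\text{MD}$ from per-example activations; `activation_at` is a hypothetical helper that returns the layer-$L$ residual-stream activation at the answer position, not part of any library:

```python
import torch

def mean_difference_vector(dataset, layer: int, activation_at) -> torch.Tensor:
    """dataset: iterable of (prompt, positive_completion, negative_completion).
    activation_at(prompt, completion, layer) -> activation vector of shape (d_model,)."""
    diffs = [
        activation_at(p, c_pos, layer) - activation_at(p, c_neg, layer)
        for p, c_pos, c_neg in dataset
    ]
    return torch.stack(diffs).mean(dim=0)  # v_MD at layer L

# at inference, add alpha * v_MD to the layer-L residual stream to steer behaviour
```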

implication

by steering existing learned representations of behaviors, CAA results in better out-of-distribution generalization than basic supervised finetuning of the entire model.

superposition hypothesis

tl/dr

phenomenon in which a neural network represents more than $n$ features in an $n$-dimensional space

A linear representation over neurons can hold more features than it has dimensions. As sparsity increases, models use superposition to represent more features than dimensions.

neural networks “want to represent more features than they have neurons”.

When features are sparse, superposition allows compression beyond what a linear model can do, at the cost of interference that requires non-linear filtering.

reasoning: “noisy simulation”, where small neural networks exploit feature sparsity and properties of high-dimensional spaces to approximately simulate much larger, much sparser neural networks

In a sense, superposition is a form of lossy compression
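
A minimal sketch of the toy setup from Elhage et al. (2022), $\operatorname{ReLU}(W^{\top}W x + b)$ with more features than dimensions, trained on sparse inputs; hyperparameters here are illustrative:

```python
import torch
import torch.nn as nn

n_features, n_dims = 20, 5    # more features than dimensions
sparsity = 0.95               # probability a given feature is zero

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_features, n_dims) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_features))

    def forward(self, x):                         # x: (batch, n_features)
        h = x @ self.W                            # compress to n_dims
        return torch.relu(h @ self.W.T + self.b)  # reconstruct features

model = ToyModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5000):
    x = torch.rand(1024, n_features) * (torch.rand(1024, n_features) > sparsity)
    loss = ((model(x) - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# inspect W @ W.T: off-diagonal interference shows features sharing dimensions
```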

importance

  • sparsity: how frequently does the feature appear in the input?

  • importance: how useful is it for lowering loss?

over-complete basis

reasoning for the set of $n$ directions 4

features

A property of an input to the model

When we talk about features (Elhage et al., 2022, see “Empirical Phenomena”), the theory is built around several observed empirical phenomena:

  1. Word embeddings: have directions which correspond to semantic properties (Mikolov et al., 2013). For example:
    V(king) - V(man) = V(monarch)
  2. Latent spaces: similar vector arithmetic and interpretable directions have also been found in generative adversarial networks.

We can define features as properties of inputs which a sufficiently large neural network will reliably dedicate a neuron to represent (Elhage et al., 2022, see “Features as Directions”)
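
A quick gensim illustration of this kind of direction arithmetic, assuming a pretrained word2vec file on disk (the path is a placeholder); it reproduces the classic analogy from Mikolov et al. (2013):

```python
from gensim.models import KeyedVectors

# placeholder path to a pretrained word2vec binary (e.g. the GoogleNews vectors)
kv = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

# V(king) - V(man) + V(woman) ≈ V(queen)
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```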

ablation

refers to the process of removing a subset of a model’s parameters or activations to evaluate the effect on its predictions.

idea: delete one activation of the network to see how performance on a task changes.

  • zero ablation or pruning: Deletion by setting activations to zero
  • mean ablation: Deletion by setting activations to their mean over the dataset
  • random ablation or resampling: Deletion by replacing activations with those from a randomly sampled input
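
A minimal sketch of zero/mean ablation of a single MLP activation in a HuggingFace GPT-2 via a forward hook; the layer index, neuron index, and dataset mean are illustrative placeholders:

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
layer, neuron = 6, 123              # which MLP hidden unit to ablate (illustrative)
mode, dataset_mean = "zero", 0.0    # dataset_mean would be precomputed for mean ablation

def ablate(module, inputs, output):
    output = output.clone()         # MLP hidden layer output: (batch, seq, 4 * d_model)
    output[..., neuron] = 0.0 if mode == "zero" else dataset_mean
    return output

# hook the MLP hidden layer of one block, run the task, compare performance
handle = model.transformer.h[layer].mlp.c_fc.register_forward_hook(ablate)
# ... run the evaluation task here, then:
handle.remove()
```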

residual stream

figure 1: Residual stream illustration

intuition: we can think of the residual stream as a highway network; in a sense it portrays the linearity of the network 5

the residual stream $x_0$ has dimension $(C, E)$, where

  • $C$: the number of tokens in the context window, and
  • $E$: the embedding dimension.

The attention mechanism $H$ processes the residual stream $x_0$, and the result is added back to give $x_1$:

$$
x_1 = H(x_0) + x_0
$$
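
The same update as a few lines of PyTorch, with shapes following the $(C, E)$ convention above; `nn.MultiheadAttention` stands in for the model's actual attention implementation:

```python
import torch
import torch.nn as nn

d_model, n_heads = 768, 12
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

x0 = torch.randn(1, 128, d_model)   # residual stream: C = 128 tokens, E = 768
h, _ = attn(x0, x0, x0)             # H(x_0): the attention layer's contribution
x1 = x0 + h                         # written back into the residual stream
```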

grokking

See also: writeup, code, circuit threads

A phenomenon discovered by Power et al. (2022) where, on small algorithmic tasks like modular addition, the model will initially memorise the training data, but after a long time will suddenly learn to generalise to unseen data

empirical claims

related to phase change
