Andro0s committed
Commit 497a064 · verified · 1 Parent(s): 99eb9a1

Update README.md

Files changed (1):
  1. README.md +166 -9
README.md CHANGED
@@ -1,12 +1,169 @@
  ---
- title: Gpt J
- emoji: 🚀
- colorFrom: green
- colorTo: blue
- sdk: gradio
- sdk_version: 5.49.1
- app_file: app.py
- pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ language:
+ - en
+ tags:
+ - pytorch
+ - causal-lm
+ license: apache-2.0
+ datasets:
+ - EleutherAI/pile
  ---

+ # GPT-J 6B
+
+ ## Model Description
+
+ GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
+
+ <figure>
+
+ | Hyperparameter       | Value      |
+ |----------------------|------------|
+ | \\(n_{parameters}\\) | 6053381344 |
+ | \\(n_{layers}\\)     | 28&ast;    |
+ | \\(d_{model}\\)      | 4096       |
+ | \\(d_{ff}\\)         | 16384      |
+ | \\(n_{heads}\\)      | 16         |
+ | \\(d_{head}\\)       | 256        |
+ | \\(n_{ctx}\\)        | 2048       |
+ | \\(n_{vocab}\\)      | 50257/50400&dagger; (same tokenizer as GPT-2/3) |
+ | Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
+ | RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
+ <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self-attention block.</p>
+ <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
+
+ The model consists of 28 layers with a model dimension of 4096 and a feedforward dimension of 16384. The model
+ dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
+ dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
+ GPT-2/GPT-3.
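+
+ As a rough sanity check, most of these hyperparameters can be read back from the checkpoint's configuration. The snippet below is a minimal sketch that assumes the `transformers` library and the `EleutherAI/gpt-j-6B` checkpoint on the Hub; attribute names follow the `GPTJConfig` class.
+
+ ```python
+ from transformers import AutoConfig
+
+ # Fetch only the configuration file; no model weights are downloaded.
+ config = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
+
+ # These attributes should mirror the hyperparameter table above.
+ print(config.n_layer)      # layers: 28
+ print(config.n_embd)       # model dimension: 4096
+ print(config.n_head)       # attention heads: 16
+ print(config.n_positions)  # context length: 2048
+ print(config.rotary_dim)   # RoPE dimensions per head: 64
+ print(config.vocab_size)   # embedding rows: 50400 (the tokenizer uses 50257)
+ ```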
+
+ ## Intended Use and Limitations
+
+ GPT-J learns an inner representation of the English language that can be used to
+ extract features useful for downstream tasks. However, the model is best at what
+ it was pretrained for, which is generating text from a prompt.
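+
+ As an illustrative (not official) sketch of such feature extraction, the hidden states returned by `transformers` can be used as contextual features; `output_hidden_states=True` is standard library behaviour, while the mean-pooling step is simply an assumption made for this example.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
+ model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
+
+ inputs = tokenizer("GPT-J learns an inner representation of English.", return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs, output_hidden_states=True)
+
+ # outputs.hidden_states is a tuple with one tensor per layer (plus the embeddings);
+ # here the last layer is mean-pooled over the sequence to give one feature vector.
+ features = outputs.hidden_states[-1].mean(dim=1)  # shape: (1, 4096)
+ ```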
+
+ ### Out-of-scope use
+
+ GPT-J-6B is **not** intended for deployment without fine-tuning, supervision,
+ and/or moderation. It is not in itself a product and cannot be used for
+ human-facing interactions. For example, the model may generate harmful or
+ offensive text. Please evaluate the risks associated with your particular use case.
+
+ GPT-J-6B was trained on an English-language-only dataset, and is thus **not**
+ suitable for translation or generating text in other languages.
+
+ GPT-J-6B has not been fine-tuned for downstream contexts in which
+ language models are commonly deployed, such as writing genre prose
+ or commercial chatbots. This means GPT-J-6B will **not**
+ respond to a given prompt the way a product like ChatGPT does. This is because,
+ unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
+ Learning from Human Feedback (RLHF) to better “follow” human instructions.
+
+ ### Limitations and Biases
+
+ The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there remain many unknowns about this kind of use. When prompting GPT-J, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
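+
+ To make the "predict the next token" framing concrete, the sketch below inspects the model's next-token distribution for a prompt. It is only an illustration of the behaviour described above, not an evaluation recipe; the prompt and the top-k value are arbitrary choices.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
+ model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
+
+ inputs = tokenizer("The capital of France is", return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Probability distribution over the vocabulary for the token following the prompt.
+ next_token_probs = torch.softmax(logits[0, -1], dim=-1)
+ top = torch.topk(next_token_probs, k=5)
+ for prob, token_id in zip(top.values, top.indices):
+     print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
+ ```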
+
+ GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
+
+ As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
+
+ ### How to use
+
+ This model can be easily loaded using the `AutoModelForCausalLM` functionality:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
+ model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
+ ```
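+
+ For generating text from a prompt, one possible (unofficial) sketch using the standard `generate` API is shown below; the sampling parameters are illustrative values, not recommended settings, and the full-precision checkpoint needs roughly 24 GB of memory to load.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
+ model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
+
+ prompt = "In a shocking finding, scientists discovered"
+ inputs = tokenizer(prompt, return_tensors="pt")
+
+ # Sample a continuation; temperature/top_p here are arbitrary example values.
+ output_ids = model.generate(
+     **inputs,
+     do_sample=True,
+     temperature=0.9,
+     top_p=0.95,
+     max_new_tokens=50,
+     pad_token_id=tokenizer.eos_token_id,
+ )
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```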
+
+ ## Training data
+
+ GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
+
+ ## Training procedure
+
+ This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
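+
+ Concretely, this objective corresponds to the standard causal-language-modelling loss in `transformers`: when `labels` are supplied, the model shifts them internally by one position and computes cross-entropy against its next-token predictions. The snippet below is a minimal sketch of that objective on a toy batch, not the actual training setup (which used Mesh Transformer JAX on TPUs).
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
+ model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
+
+ batch = tokenizer("GPT-J was trained on the Pile.", return_tensors="pt")
+
+ # Passing labels=input_ids makes the model compute the next-token
+ # cross-entropy loss (labels are shifted internally by one position).
+ outputs = model(**batch, labels=batch["input_ids"])
+ print(outputs.loss)  # average negative log-likelihood per predicted token
+ ```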
+
+ ## Evaluation results
+
+ <figure>
+
+ | Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
+ |---|---|---|---|---|---|---|---|---|
+ | Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
+ | GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
+ | GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
+ | GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
+ | Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
+ | GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
+ | GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
+ | GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
+ | Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
+ | GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
+ | Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
+ | **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
+ | GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
+ | GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
+ | GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
+ | GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
+ | GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
+ <figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
+
+ <p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
+ running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
+ weights or with API access. Due to subtle implementation differences as well as different zero-shot task framing, these
+ might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
+ details.</p>
+
+ <p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
+ reproduce the generation quality and evaluations (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>,
+ <a href="https://github.com/pytorch/fairseq/issues/2358">2</a>, <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>).
+ Thus, evaluation was not attempted.</p>
+
+ <p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
+ failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are
+ trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
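+
+ For harness-based numbers, something along the lines of the sketch below can be used. This assumes a recent release of `lm-evaluation-harness`; the `lm_eval.simple_evaluate` entry point, the `"hf"` model type, and the task names shown here have changed across harness versions, so treat the exact arguments as an assumption and check the harness documentation.
+
+ ```python
+ import lm_eval  # pip install lm-eval  (the API may differ between versions)
+
+ # Evaluate the GPT-J checkpoint on a subset of the tasks from the table above.
+ results = lm_eval.simple_evaluate(
+     model="hf",
+     model_args="pretrained=EleutherAI/gpt-j-6B",
+     tasks=["lambada_openai", "hellaswag", "piqa", "winogrande"],
+ )
+ print(results["results"])
+ ```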
+
+ ## Citation and Related Information
+
+ ### BibTeX entry
+
+ To cite this model:
+ ```bibtex
+ @misc{gpt-j,
+   author = {Wang, Ben and Komatsuzaki, Aran},
+   title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
+   howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
+   year = 2021,
+   month = May
+ }
+ ```
+
+ To cite the codebase that trained this model:
+ ```bibtex
+ @misc{mesh-transformer-jax,
+   author = {Wang, Ben},
+   title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
+   howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
+   year = 2021,
+   month = May
+ }
+ ```
+
+ If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
+
+ ## Acknowledgements
+
+ This project would not have been possible without compute generously provided by Google through the
+ [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
+
+ Thanks to everyone who has helped out in one way or another (listed alphabetically):
+ - [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
+ - [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
+ - [Leo Gao](https://twitter.com/nabla_theta) for running zero-shot evaluations for the baseline models for the table.
+ - [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
+ - [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
+ - [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.