mirror of https://github.com/karpathy/minGPT synced 2024-05-04 06:36:10 +02:00
Commit Graph

39 Commits

Author SHA1 Message Date
Mishig Davaadorj 90420ee978 Use XOR operator `^` for checking assertion `type_given XOR params_given` in `GPT.__init__` (see the XOR sketch after the list) 2022-07-28 22:33:51 +02:00
Luigi Di Sotto c4c650e3d5 Add optimizer to Trainer's self for callbacks. 2022-07-26 10:17:44 +02:00
Andrej e2065c59c6 use a slightly more extended example that includes my last name too, because it's nice to show how it breaks up into more tokens 2022-07-12 04:31:31 +00:00
Andrej d8dd157f9c add a full example into the script as well 2022-07-12 04:25:17 +00:00
Nat Friedman e9f6e3d448 Fix some small typos 2022-07-11 20:55:38 -07:00
Andrej 9642f40b83 add a refactored BPE encoder from openai. Basically I don't super trust the huggingface tokenizer: the implementation sprawls across multiple files and inheritance, and has special magic handling around AddedTokens that I don't fully follow. Prefer to roll our own explicit implementation here that exactly mirrors the code of OpenAI and nothing else 2022-07-12 02:01:41 +00:00
Andrej acaadacd59 refactor sequence generation into the model and match the huggingface/transformers API (see the generate() sketch below). touches everything, but this makes a lot more sense to me aesthetically 2022-07-11 18:50:53 +00:00
Andrej 803f38800d refactor pretrained weight loading into from_pretrained and add unit tests 2022-07-08 22:56:15 +00:00
Andrej 4a56b20f80 fix parameter counting 2022-07-08 21:10:54 +00:00
Andrej 2e979dde5f ummm eyeroll 2022-07-01 15:34:34 +00:00
Andrej Karpathy 2f3400f42a split out register_callback to set/add 2022-07-01 08:32:19 -07:00
Andrej Karpathy d9ea878100 add maxiters to trainer 2022-07-01 08:31:46 -07:00
Andrej 00aa9cb2ed ok i hated the previous global/local config idea. reverting it and simplifying, and i think this is the best api so far 2022-06-27 20:41:01 +00:00
Andrej ea20661f78 be more defensive around model_type, don't let the user shoot themselves in the foot 2022-06-27 19:26:26 +00:00
Andrej b483fbe8db suppress warnings, plus lightweight docs and changes 2022-06-24 20:48:05 +00:00
Andrej c6c973738b implement scaled init per the gpt-2 paper (see the scaled-init sketch below) 2022-06-24 17:48:20 +00:00
Andrej 7e68832554 delete ugly Conv1D, a real abomination of this Universe 2022-06-24 03:22:27 +00:00
Andrej 13a42a6ce0 ok step 1, create a get_pretrained function that inits with openai weights 2022-06-24 01:43:39 +00:00
Andrej dfb892044d big big refactor so that we can load actual gpt2 weights from openai. this is still wip, want to clean it up good 2022-06-23 23:33:44 +00:00
Andrej 3cf811e67c delegate more stuff to the Trainer class 2022-06-01 17:55:36 +00:00
Andrej 8860486f66 attempt to make model config a little bit better, still hate it 2022-06-01 17:14:22 +00:00
Andrej Karpathy 9ec160cd8c small tweaks. found an issue with my brilliant plan to solve all configuration problems. have to think about it more 2022-05-28 15:05:34 -07:00
Andrej 82768a7a95 small tweaks and a bug fix that makes me doubt the current approach with the configs a bit... shot myself in the foot a bit 2022-05-28 03:44:32 +00:00
Andrej b162d3f44e fix small bugs and add ability to train/eval on either cpu or gpu 2022-05-28 03:17:24 +00:00
Andrej Karpathy fa1b46f78a bit more logging, including saving a model but only if it's the best one yet 2022-05-27 16:06:31 -07:00
Andrej Karpathy a330148c22 add ability to override config params from command line args (see the config-override sketch below). re-inventing the wheel a bit here, should i just use yacs or something? i just really really really do not like dependencies 2022-05-27 12:16:07 -07:00
Andrej Karpathy 8425759c24 early work, refactoring the adder first 2022-05-27 10:04:52 -07:00
Andrej Karpathy 3ed14b2cec i know it doesn't look like much, but this kwarg was not used lol :D 2022-03-27 17:48:05 +01:00
Andrej Karpathy 107b6d7e31 add comment to clarify #39. Ty @JonathanSum for the inspiration PR 2022-03-26 13:52:51 +00:00
Andrej Karpathy 031ad36f29 don't use default kwargs; in my experience they always lead to bugs (see the default-kwargs sketch below) 2022-03-26 13:47:52 +00:00
Thomas Viehmann 176be2d9bf initialize position embeddings 2022-03-26 13:36:20 +00:00
waynemystir 8fcaafb367 move instantiation of DataLoader 2020-11-20 13:44:49 -05:00
Andrej Karpathy 339f4e7ad3 fix dataloader issue pointed out by @fpgaminer in #28 and introduce shuffle=True and pin_memory=True as defaults (see the DataLoader sketch below). That said I'm still not very happy with this demo because we're likely overfitting a massive model to tiny text and nothing is really tuned at all. This needs a real train/test dataset and a tiny bit of hyperparameter search, todo. 2020-08-24 23:23:53 -07:00
Andrej Karpathy 63902c8d09 remove passive aggressive comment. control yourself andrej. 2020-08-23 19:36:23 -07:00
Andrej Karpathy 38d7327dfd instead of -1e10 use float -inf, which I think will play nicer with fp16 down the line (see the masking sketch below) 2020-08-23 17:47:05 -07:00
Andrej bbbdac74fa properly separate params that should be weight decayed (see the weight-decay sketch below), and make a small incremental step towards Lightning compatibility by creating the optimizer object inside the model's configure_optimizers 2020-08-23 15:48:20 -07:00
Andrej 23982656df add early stopping logic 2020-08-23 15:09:09 -07:00
Andrej Karpathy d708b1e5e2 fix a dumb bug, intended to use -1e10 instead of 1e-10. thank you @fpgaminer for spotting it and bringing it to my attention 2020-08-18 17:05:59 -07:00
Andrej Karpathy 0d9d098cd2 first commit, able to multigpu train fp32 GPTs on math and character-level data, but have done barely any tuning. 2020-08-17 00:39:02 -07:00
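
Sketches for a few of the techniques referenced in the commits above follow; they are illustrative reconstructions under stated assumptions, not the repository's exact code. First, the XOR assertion from 90420ee978: on two Python booleans, `^` is exclusive-or, which cleanly asserts that exactly one of two configuration paths was taken. A minimal sketch, assuming hypothetical config attribute names:

```python
class GPT:
    def __init__(self, config):
        # either a named model_type, or explicit size params, but not both
        type_given = config.model_type is not None
        params_given = all(
            getattr(config, k) is not None
            for k in ("n_layer", "n_head", "n_embd")
        )
        # on bools, ^ is exclusive-or: exactly one of the two must be given
        assert type_given ^ params_given, "specify model_type or sizes, not both"
```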
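The generate() refactor from acaadacd59 matches the huggingface/transformers calling style. A sketch of such an autoregressive sampling loop, assuming the model returns a (logits, loss) pair; the parameter names mirror the common HF-style signature but are illustrative here:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate(model, idx, max_new_tokens, temperature=1.0, do_sample=False, top_k=None):
    # idx is a (batch, time) tensor of token indices
    for _ in range(max_new_tokens):
        logits, _ = model(idx)                  # assume model returns (logits, loss)
        logits = logits[:, -1, :] / temperature # logits at the last time step
        if top_k is not None:                   # optionally crop to top-k options
            v, _ = torch.topk(logits, top_k)
            logits[logits < v[:, [-1]]] = -float('inf')
        probs = F.softmax(logits, dim=-1)
        if do_sample:
            idx_next = torch.multinomial(probs, num_samples=1)
        else:
            _, idx_next = torch.topk(probs, k=1, dim=-1)
        idx = torch.cat((idx, idx_next), dim=1) # append and continue
    return idx
```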
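The scaled init from c6c973738b: the GPT-2 paper scales the initialization of residual-projection weights by 1/sqrt(N), where N is the number of residual additions (two per transformer block). A sketch assuming minGPT-style `c_proj` naming:

```python
import math
import torch

def apply_scaled_init(model, n_layer):
    for name, p in model.named_parameters():
        # assumes residual projections are named 'c_proj' as in minGPT
        if name.endswith('c_proj.weight'):
            # 2 * n_layer residual connections (attention + MLP per block)
            torch.nn.init.normal_(p, mean=0.0, std=0.02 / math.sqrt(2 * n_layer))
```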
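The config override from a330148c22: a dependency-free way to let `--key=value` command-line args overwrite attributes on a config object, which avoids pulling in yacs or similar. A sketch with illustrative names:

```python
import sys
from ast import literal_eval

def override_from_args(config, args=None):
    for arg in (args if args is not None else sys.argv[1:]):
        assert arg.startswith('--') and '=' in arg, f"expected --key=value, got {arg}"
        key, val = arg[2:].split('=', 1)
        try:
            val = literal_eval(val)  # "3e-4" -> float, "True" -> bool, etc.
        except (ValueError, SyntaxError):
            pass                     # otherwise keep it as a plain string
        assert hasattr(config, key), f"unknown config attribute: {key}"
        setattr(config, key, val)
```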
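On the default-kwargs warning in 031ad36f29 (the commit doesn't name a specific incident, so this is a generic illustration): the classic Python pitfall is that a mutable default is evaluated once at function definition and then shared across calls.

```python
def broken(history=[]):       # the default list is created once, at def time
    history.append('step')
    return history

broken()   # ['step']
broken()   # ['step', 'step']  <- state leaks across calls

def fixed(history=None):      # the conventional workaround
    history = [] if history is None else history
    history.append('step')
    return history
```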
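The DataLoader defaults from 339f4e7ad3: `shuffle=True` prevents the model from seeing the data in the same order every epoch, and `pin_memory=True` speeds up host-to-GPU transfers. A sketch; `train_dataset` and the batch size are illustrative:

```python
from torch.utils.data import DataLoader

loader = DataLoader(
    train_dataset,      # any torch Dataset
    batch_size=64,
    shuffle=True,       # reshuffle every epoch
    pin_memory=True,    # page-locked host memory for faster .to('cuda')
    num_workers=4,
)
```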
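The masking change from 38d7327dfd (and the related bug fixed in d708b1e5e2, where 1e-10 barely masked anything): -1e10 exceeds fp16's maximum magnitude (~65504), whereas float('-inf') is representable and softmaxes to exactly zero. A toy sketch of causal attention masking:

```python
import torch

att = torch.randn(1, 4, 4)                  # toy attention scores (B, T, T)
mask = torch.tril(torch.ones(4, 4)).bool()  # causal (lower-triangular) mask
# float('-inf') survives fp16 casts and gives exactly 0 after softmax
att = att.masked_fill(~mask, float('-inf'))
probs = torch.softmax(att, dim=-1)
```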
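Finally, the weight-decay separation from bbbdac74fa: biases and LayerNorm gains should typically not be weight-decayed. A simplified sketch using a dimensionality heuristic (the commit's actual logic walks modules and is more careful, e.g. about embeddings); the hyperparameter defaults are illustrative:

```python
import torch

def configure_optimizers(model, weight_decay=0.1, learning_rate=3e-4):
    # 2D+ weights (matmuls) get decay; 1D params (biases, LayerNorm) do not
    decay = [p for p in model.parameters() if p.requires_grad and p.dim() >= 2]
    no_decay = [p for p in model.parameters() if p.requires_grad and p.dim() < 2]
    groups = [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]
    return torch.optim.AdamW(groups, lr=learning_rate)
```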