Minix and nanoGPT

Jan 12, 2023

Minix was a Unix-like operating system developed in 1987 by Andrew S. Tanenbaum (ast). He wrote Minix as a teaching tool – it is the running example in his book, Operating Systems: Design and Implementation, which also includes Minix's source code. Tanenbaum is also known for his work on microkernels, and for having Werner Vogels (CTO of Amazon) as one of his PhD students.

Minix was minimal (mini-Unix) – easy to understand, light on system requirements, cheap to license ($69), and small, at fewer than 12,000 lines of code.

A young Linus Torvalds came across the Minix community's Usenet group (which had grown to 40,000 users). Inspired by the operating system and the accompanying book, he built his own operating system that, in some ways, resembled Minix.

From: [email protected] (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: 1991Aug25, [email protected]
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki.

Hello everybody out there using minix -

I’m doing a (free) operating system (just a hobby, won’t be big
and professional like gnu) for 386(486) AT clones. This has
been brewing since april, and is starting to get ready. I’d like
any feedback on things people like/dislike in minix; as my OS
resembles it somewhat (same physical layout of the file-system
due to practical reasons) among other things.

I’ve currently ported bash (1.08) and gcc (1.40), and things seem to work.
This implies that I’ll get something practical within a few months, and I’d
like to know what features most people want. Any suggestions are welcome,
but I won’t promise I’ll implement them 🙂

Linus Torvalds [email protected]

nanoGPT is a project published on Andrej Karpathy’s GitHub. Its README describes it as follows:

The simplest, fastest repository for training/finetuning medium-sized GPTs. It is a rewrite of minGPT that prioritizes teeth over education. Still under active development, but currently the file train.py reproduces GPT-2 (124M) on OpenWebText, running on a single 8XA100 40GB node in 38 hours of training. The code itself is plain and readable: train.py is a ~300-line boilerplate training loop and model.py a ~300-line GPT model definition, which can optionally load the GPT-2 weights from OpenAI. That's it.
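
To make that "that's it" concrete, here is a minimal sketch (mine, not from the README) of what using that ~300-line model.py looks like: loading the GPT-2 weights and sampling a few tokens. The GPT class with from_pretrained and generate matches nanoGPT's model.py as of early 2023, but treat the exact names and signatures as assumptions.

# A rough sketch of loading GPT-2 via nanoGPT's model.py and sampling from it.
# Assumes a local nanoGPT checkout, with torch, tiktoken, and transformers installed.
import torch
import tiktoken
from model import GPT  # nanoGPT's ~300-line GPT definition

model = GPT.from_pretrained("gpt2")  # pulls the 124M GPT-2 weights
model.eval()

enc = tiktoken.get_encoding("gpt2")
prompt = torch.tensor([enc.encode("Hello everybody out there using minix -")])

with torch.no_grad():
    out = model.generate(prompt, max_new_tokens=40, temperature=0.8, top_k=200)
print(enc.decode(out[0].tolist()))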

Readable, pedagogical, simple, minimal. I wonder if there will be a Linux to Karpathy's nanoGPT.
