Black Friday code is open source, ChatGPT assistant and its future
Hi,
how was your week?
Mine was quite interesting. Major highlights were: NixOS, using event sourcing in a discrete-event simulation (we are back to C# here) and finding a few good uses for ChatGPT.
To avoid getting too excited, in this newsletter I'll continue writing about the Black Friday kata and talk a bit about ChatGPT.
Black Friday code is Open Source
Most of the "Black Friday" experiment code is back to being stable (there is more about it on my blog). If you are interested, I've made the repository public on GitHub.
This is my implementation of a Trustbit Black Friday kata. I went for a single-aggregate design in golang, using SQLite as the underlying storage.
You won't see an actual event store, because it is an implementation detail that can be added around "tx.Commit()". The important part: events drive the state change.
This code isn't clean or perfect, but it is good enough to be shared with the community.
My colleague Christian Folie has been making progress on his F# implementation of the same domain. His implementation converges towards multiple aggregates with a hierarchy of locations and sagas.
Both implementations are tested by the same set of event-driven tests, provided by the bfkata repository. If you want to take a stab at implementing this kata in your own language, just check it out and tell me what you think!
ChatGPT Assistant
I'm really impressed by ChatGPT and Large Language Models (LLMs) in general. Earlier this week, I had to build a small Python project to demo Nix and Python packaging capabilities.
I didn't care much about the implementation details; I just wanted it to use PyTorch and have training, an API and a nice UI. Through a series of prompts, ChatGPT generated:
Python code that used PyTorch and Iris dataset to train a model;
JSON API for prediction with Flask;
Web UI to invoke that API;
CSS styles to make everything look "like Apple website".
You can find screenshots in this Twitter thread. Here is the core machine-generated code: main.py and index.html.
ChatGPT can't replace real people. I wouldn't trust it to write texts for me, nor would I trust it to write important code. However, it is a good assistant.
For instance, it could:
(1) given a piece of code - write unit tests for it
(2) given an API - write a mock that returns some data
(3) given an API - generate web UI to interact with that API
I could write this boring code, too, but it would take time. I'm especially slow with anything that involves HTML and CSS.
ChatGPT can do all this instantly, again and again. I just need to review and tell it "make the button green and rewrite everything in async", "add Makefile to build and test" or "replace with bootstrap styles".
A human assistant would go crazy after the 10th "rewrite everything but also add X". A Large Language Model never will.
If you haven't tried it yet, I strongly recommend giving it a try!
Future of Large Language Models
I think Large Language Models are here to stay. There is no denying that.
Access might be limited or pricey in the mid-term. Replit Ghostwriter is 10 USD per month, just like GitHub Copilot; OpenAI is considering pricing ChatGPT at 20 USD per month.
Plus, ChatGPT is based on GPT-3 with 175 billion parameters, while GPT-4 is rumored to have 100 trillion parameters. Running that is bound to be very computationally expensive.
However, I'm betting on open source. There could be another "StableDiffusion" moment waiting to happen, where the open source community comes along and releases a decent model capable of running on commodity hardware. Open Assistant is exactly the project that aims to do that. Currently they are collecting a dataset of human prompt-response samples; training will take place afterwards.
Long story short, I think the future is going to be exciting. It could make us more productive, capable of delivering more value and getting rewarded for that.
Talk to you next week!
With best regards,
Rinat
PS: Here are a few links I found particularly interesting this week:
Who owns the generative AI platform - companies that provide resources to run ML models will make most of the money.
Carving the scheduler out of the orchestrator - how fly.io migrates away from Nomad.