Every time I come across a skeptic article like this, I just remind myself of the satirical paper "On the Impossibility of Supersized Machines" and can't help but laugh.
I'm not really a programmer, just light scripting. But I've got to admit, I had an LLM build me a Python test script that would list all available LLMs, let me pick one or run them all, then load a model, run some tests, unload it and load the next one, repeat, and finally give me the results in comma-delimited format.
And it wrote it without any issues. (One little thing: I was in WSL and had to call a Windows binary to unload the model from LM Studio, but that was an easy fix.)
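For the curious, the shape of it was roughly the sketch below. This is a minimal, hedged version: it assumes LM Studio's local OpenAI-compatible server on the default port 1234 and that the `lms` CLI is on your PATH for load/unload; the test prompt and CSV columns here are just placeholders.

    import csv, json, subprocess, sys, urllib.request

    BASE = "http://localhost:1234/v1"  # LM Studio's default local server

    def list_models():
        # /v1/models is part of the OpenAI-compatible API LM Studio exposes
        with urllib.request.urlopen(f"{BASE}/models") as r:
            return [m["id"] for m in json.load(r)["data"]]

    def run_test(model):
        # One trivial smoke-test prompt per model; swap in real tests here
        body = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": "Reply with: ok"}],
        }).encode()
        req = urllib.request.Request(f"{BASE}/chat/completions", body,
                                     {"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as r:
            return json.load(r)["choices"][0]["message"]["content"].strip()

    writer = csv.writer(sys.stdout)
    writer.writerow(["model", "reply"])
    for model in list_models():
        # Assumes `lms load`/`lms unload` work where this runs; from WSL
        # you may have to call the Windows binary instead, as noted above.
        subprocess.run(["lms", "load", model], check=True)
        writer.writerow([model, run_test(model)])
        subprocess.run(["lms", "unload", "--all"], check=True)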
It seems to me LLMs writing code open up some interesting areas: they get people learning to code, write basic code for users, and help solve problems when you're stumped. They won't take over real dev jobs, but they will help everyone. LLMs are like a teacher; we just need to make sure the code they give us is correct. Lots of spin-off jobs will come out of this.
I've even started asking LLMs questions like "what's a good free program for xyz" or "how do you remap keys on a Mac like a PC". It's been very helpful.
And I'm using Fabric and Ollama to parse data (news/video text/URLs) and break it down into all kinds of categories. And I've only scratched the surface of Fabric usage.
I'm having a blast using LLMs for projects outside my skillset that I've always wanted to do. I'm even going to make an RPG game with AI music/art/code soon.
Future looks awesome to me.
While real-world professional programming will continue to depend on well-honed human judgement and skill for a long time, the real value of LLMs that write code is that they democratize access to toolkits and languages that would otherwise have too steep a learning curve for a layman to pick up in a reasonable time.
I'm using GitHub's Copilot to write a React application, something I wouldn't have contemplated with my current level of brain fog and risk aversion. I've bounced off some dead ends, but so far I'm making progress with my cheap-as-heck virtual assistant programmer, who happens to know React and other things, as a teammate.
My guess is that by testing and introspecting a lot of small models, trained on datasets and distilled as much as possible, there will eventually be a good understanding of the types of neural circuits that are effective for different types of representations or tasks. (Although presumably there is more than one way to accomplish any given task or representation efficiently.)
I bet the hard part is creating compressed and decomposed representations that really encompass all of the common edge cases, and maintaining a good compression ratio without having conflicting uses for the same neuron.
But with small networks, it seems like there will be a way to fully analyze, for example, how language is grounded in visual and spatial-temporal data.
Maybe we can use another network to help come up with circuit factorings and connections for specific use cases given a Q&A or conversation dataset.
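As a toy example of what "conflicting uses for the same neuron" could look like when measured, here's a minimal sketch; the activations are random stand-ins and the threshold is arbitrary, so treat it as the shape of the analysis rather than a real method:

    import numpy as np

    # Stand-ins for a hidden layer's activations recorded while one small
    # network performs two different tasks; shape (examples, neurons).
    rng = np.random.default_rng(0)
    acts_task_a = rng.normal(size=(1000, 64))
    acts_task_b = rng.normal(size=(1000, 64))

    # A neuron that is strongly active on both tasks is a candidate for
    # "conflicting uses": the same unit serving two unrelated circuits.
    mean_a = np.abs(acts_task_a).mean(axis=0)
    mean_b = np.abs(acts_task_b).mean(axis=0)

    threshold = 0.5  # arbitrary cutoff for "active on this task"
    shared = (mean_a > threshold) & (mean_b > threshold)
    print(f"{shared.sum()} of {shared.size} neurons look shared across tasks")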
(1) We are past the point of this headline. (2) Remaking software development to work with LLMs is going to take more than two years. (3) We're making great progress.
First, AI is not real artificial intelligence; it can be viewed as a very complex look-up and analysis process.
For development, you need to think about what you are doing. To me, AI still has decades to go before it becomes a reality.
When you're doing your 12983rd page in React, do you REALLY need to consider every angle of it or are you just typing away from muscle memory?
These are the things "AI" is good at: it's Intellisense on actual steroids. It can take your project's style and structure into consideration and create suggestions based on that.
It's not for breaking new ground and doing cutting-edge software engineering.
I agree, but we'll see what motivation hundreds of billions of dollars of cost savings (i.e., people's livelihoods) produces, I guess?
This is so true. All my web development coding mistakes used to be typos, which are now one less thing to worry about. Unreal.
Upon reflection, what we're doing with LLMs is filling a new void. Once upon a time most machines and devices, and software, had to come with really good documentation. Digital Equipment Corporation used to turn out huge quantities of really good stuff.
Now we have LLMs that have been fed vast quantities of code based on the collective hunches of humanity. Due to the nature of this source, the result is simultaneously more productive, yet lower quality.
Another "LLM skeptic" arguing it is impossible for LLMs to ever reliably answer a certain type of prompt, because they can't answer it reliably in 2024.
Every previous skeptic making this argument has been proven wrong so far.
What these arguments usually miss is that precision doesn’t really matter to an expert user.
A professional, experienced engineer can filter the crap from the real. LLMs usually get me over the mental hump of project setup/boilerplate. They also let me rapidly prototype vague ideas in record time.
They also make the bar for a productive junior engineer much higher, as an LLM can perform a precisely defined task much faster than a junior engineer.
Some people are really great at reading code, and they're the best equipped to take advantage of LLMs in their current, slightly unreliable state. If you can read and fix the LLM's code faster than you can write it from scratch, then it's a net win.
Even taking this as completely true, what does it mean for the field?
How can one become a professional, experienced engineer if one has not put in multiple years being a productive junior engineer?
And what happens if those jobs are no longer available?
My bet is that LLM tools will do for tech what cloud, SaaS, and other productivity tools did before. Make productive users better and faster.
My first job in 2010 required 12 operations personnel to manage a simple app server + database. The DB had a DBA on-call rotation of 3 people, along with Oracle support contracts. These days that job would be done using cloud tools, and that team of 12 people wouldn't exist.
The same story has been true throughout the infrastructure space for 15+ years, with current practices foisting the remaining work on software teams.
I suspect junior engineers who can make effective use of LLM tools will be in high demand. LLMs also minimize the benefit of experience, provided you can still sift BS from truth. A firm ultimately only cares about who delivers the most impact.
The same thing that has happened forever. People practice and gain skills on their own time before being welcomed into the professional community. Skilled new college graduate programmers have often put in a decade or more of learning before they graduate because they start young. They may have taught themselves many programming languages, studied many open source libraries, learned about profiling and efficiency tradeoffs, etc. Companies with serious engineering needs simply don’t want and haven’t wanted those who lack this natural desire to build and iterate. The only change is that the bar continues to rise a bit and all people are now competing at one level or another with machines.
> They also make the bar for a productive junior engineer much higher
This is the paradox that I'm seeing everywhere. While LLMs ostensibly make it easier for inexperienced engineers to get started, in practice they make juniors irrelevant in the market. A senior with an LLM is more productive than a senior with a team of juniors. So while LLMs may make programming accessible to more people, they simultaneously kill those people's chances of employment.
I predict the software market is going to shift away from entry-level jobs entirely. They can be replaced with LLMs under an experienced hand, which makes senior engineers more valuable. How are we going to train seniors if we're not hiring juniors? ¯\_(ツ)_/¯
More like a stepping stone. LLMs are Huginn; the next step is Muninn.
Excellently put. Haven't heard of Huginn/Muninn before.
Norse mythology ... Odin's 2 ravens, Huginn ("thought") and Muninn ("memory").
IMO this misses tool use as a way to break down functionality into other models or classically programmed components, which gets you much of the isolation and composability that the author is after.
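A minimal sketch of that idea: the model only emits a routing decision, and isolated, classically programmed components do the actual work. The JSON shape and tool names here are assumptions, not any particular vendor's tool-calling API.

    import json

    # Classically programmed, independently testable components.
    def add(a, b):
        return a + b

    def word_count(text):
        return len(text.split())

    TOOLS = {"add": add, "word_count": word_count}

    # Stand-in for what a model might emit when asked to delegate;
    # real tool-calling APIs differ, this JSON shape is an assumption.
    model_output = '{"tool": "add", "args": {"a": 2, "b": 3}}'

    call = json.loads(model_output)
    result = TOOLS[call["tool"]](**call["args"])
    print(result)  # 5 -- the deterministic component did the work,
                   # so its behavior can be tested in isolation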
We need an LLM that can understand SOLID.
"Why autocomplete within software development may be a dead end"
I can't think of many things more boring than arguing whether "autocomplete is bad" or "autocomplete is a superpower".
More advanced autocomplete saves you more time at first, but because you still need to review the autocompleted part and know exactly how it works, it doesn't save as much time as you thought.
News at 11.
For the 100500th time, it turns out that typing speed is not the bottleneck if you are doing anything original, and generating more code in less time is not as big a win as you'd think. However, we don't always get to do original stuff, and 90% of programmers do unoriginal stuff 90% of the time.
An interesting question is whether autocomplete that completes based on others' code without their permission is stealing, but even that is less interesting in software because most licenses are permissive (not so much in art, etc.).
I feel this application of LLMs is lazy and a bit braindead.
We are bestowed the power of gods, and instead of thinking about creating something new and placing that power in the hands of users, you... do the same thing you were doing before, but delegate your gruntwork to the machine so you can... what, go out for drinks after 5?
It's either that, or people who have no experience building apps and don't have the kind of money to pay a dev, bless them; at least they are doing something that's new to them.
Devs trying out Devin and that kind of stuff have no excuse.