My company is doing the same.
I’m in the latter part of my career so have a lot of experience. I’ve used it to do some market analysis — basically a comparison of machined component costs from various regions of the world. There is a lot of grey area in doing this, and I already knew the general differences in price structures, labor rates, etc. It worked well in that instance and put together a quick, easy report.
I have seen other coworkers utilize it more heavily, and knowing and understanding the data is key. I can spot when something doesn’t look right based on the output vs. the reality.
If the younger generation relies on it too heavily, they won’t develop the ability to judge whether the output is truly correct. They haven’t been exposed to enough real-world data to tell when something is off.
I think a lot of the doom hype is propagated by the AI companies themselves to make themselves seem more valuable. There was a good post about how this has been a slow takeoff to AGI, and the gains since ChatGPT was released have been linear, not super-linear. This is mainly because scaling has consistently been the fix for problems: bigger models, bigger context windows, more training data. But those don’t get you to escape velocity; at any given time they’re all finite. Don’t get me wrong, this is still powerful, but what we have right now is a freshman in high school. I would even call it “AGI”, just not ASI (artificial superintelligence).
I work with it every day for coding, research, document comprehension, and more. The current architecture will not reach IAI (independent artificial intelligence) because:
- the models don’t learn (update weights) between queries (see the sketch after this list)
- they memorize the world but (arguably) don’t comprehend it, which is why
- only scaling has helped these models; they memorize rather than generalize out of distribution
- this causes them to get stuck when solving problems (particularly in coding), unable to figure out a way forward past a certain point
This last piece is getting better, but the architecture will always have that problem.
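To make the first point concrete: once training ends, an LLM’s weights are frozen at inference time, so nothing it “experiences” in one query carries over to the next. A minimal sketch, assuming PyTorch, with a toy linear layer standing in for the model:

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM: weights are fixed once training ends.
model = nn.Linear(8, 8)
model.eval()  # inference mode

before = model.weight.clone()

with torch.no_grad():  # no gradients are even computed at inference
    for _ in range(100):  # a hundred "queries"
        _ = model(torch.randn(1, 8))

# The weights are bit-for-bit identical: nothing was learned between queries.
assert torch.equal(before, model.weight)
print("weights unchanged after 100 queries")
```

Fine-tuning or retrieval can bolt memory on from the outside, but the model itself learns nothing between queries.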
I am seriously worried about gen ai video - that is going to cause chaos…
My area of study was natural language processing, and true language understanding remains an unsolved problem. LLMs are really just brute-forcing pattern matching that is “good enough”, but they’re not going about it in a way that will lead to AGI.
The lack of understanding is underpinned by the fact that there’s no underlying symbolic representation of real world objects, no knowledge graphs of relations, and so on. Throwing more GPU horsepower and larger data sets into these models won’t solve these underlying issues.
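For contrast, here is a minimal sketch of what an explicit symbolic representation looks like; the entities and facts are hypothetical, purely for illustration. In a knowledge graph, “a piston is part of a car” is derived by chaining explicit relations rather than by statistical association:

```python
# A toy knowledge graph: explicit symbolic relations between objects.
# (The entities and facts here are hypothetical, for illustration only.)
triples = {
    ("piston", "part_of", "engine"),
    ("engine", "part_of", "car"),
    ("car", "is_a", "vehicle"),
}

def part_of_closure(entity):
    """Chain 'part_of' edges transitively: explicit relational reasoning."""
    found, frontier = set(), {entity}
    while frontier:
        nxt = {o for (s, r, o) in triples if r == "part_of" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

print(part_of_closure("piston"))  # {'engine', 'car'}: a piston is part of a car
```

An LLM has no such structure to traverse; whatever relational knowledge it has is implicit in token statistics.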
I work on the hardware side of AI. I’ve been in the computer hardware industry for 35 years working for household name mega-corps mostly. Most of the money and focus is on AI growth currently, and this is where all of our customers are investing as heavily as possible…you see the results in the stock market. The reason they are investing is not because of some dream or prediction…they have their own customers who are using this hardware as fast as it can be installed and the more powerful we can make it the better. My main concern is how much power it is all consuming…larger datacenters consuming as much power as large cities and companies exploring nuclear powerplants just to power the datacenters. It is a lot of investment for society and I hope it results in something better than people glued to a screen for an entertainment feed.
My kids are early in high school and trying to find a career direction. I tell them that information and knowledge are commodity items: anything that is known (and not confidential) will be freely and easily available to anyone, and the challenge will be in asking the right questions or describing the problems. What will have value is being able to harness this to create new, valuable things, which requires being able to understand needs and desires. There are also increasing opportunities in dealing with ethical issues.

Tech is going through an inflection point, with a lot of layoffs at major companies the last few years. Skills that provided solid careers a decade ago are sadly no longer valued, and at least part of that is AI being able to assist in basic development work such that fewer people are needed to crank out code and debug/validate it. On the positive side, I can now develop tools/code/macros in minutes to perform complex manipulations that previously would have taken me all day, if I could have done them at all.

My son was preparing for his geometry final…he used AI to give him summary notes for the chapters he needed to study, and to explain how to solve the problems he didn’t understand. His learning experience will be very different from mine.
I’ve been a software engineer for 21 years, and it’s no longer a viable career. A member of my team left earlier this year, taking us from a team of 6 to 5, and AI has easily replaced that person. It’s a pretty common pattern when coding something to look for an example to start from. Sometimes you’d find one; if not, it would take more time to develop something from scratch. With AI, it can nearly always generate not just an example, but something that’s pretty close to what you need. Honestly, AI could replace more people on my team, although I’d never tell my company that. At this point, I’m just going to hold on as long as I can and treat every paycheck like a gift.
To meander into finances: I’ve always been into saving/investing (commonly called the FIRE community these days). In the beginning of my career, I did it so I wouldn’t procrastinate on saving for retirement until the end. For the last 10 years, my saving/investing discipline was fueled by wanting to have power over my boss/employer (FU money). Now saving/investing is necessary because AI is coming for my job far sooner than I ever imagined. I’m so thankful I made those decisions and am in a position to weather whatever employment situation AI puts me in.
On the financial front - it seems like a lot of people who will be losing their jobs are people who were, traditionally, in their peak earning years. If you have had a white collar job for the last twenty years and suddenly find yourself with a nearly worthless skill set due to AI, what do you pivot to?
As you said, ten years ago, software engineering seemed like a smart career choice. Are people seeing the writing on the wall and pivoting from software engineering to other, more stable careers?
How does this affect high school kids, who typically tailor their senior-level classes to a particular university field? They are making a six-year decision and investment in their education, and their job prospects could look vastly different six years from now given the rapid pace of AI advancement.
I don’t agree with the “software engineering is dead” fears. Right now AI is a tool that helps speed up coding, but mostly it’s good for filling in repetitive code and tests, and it sometimes replaces “work” I would have done searching Google and Stack Overflow for answers.
Is it trustworthy enough to use without supervision? Absolutely not. That’s especially true in security-critical work, but even general-purpose AI-generated code can have significant errors that cost a company money. Experienced engineers take it as a suggestion, then improve it and push it to the finish line themselves.
I do think this is harmful to entry-level software engineers, because they won’t be given the chance to learn best practices and the reasoning behind them.
I also think long term it will be harmful to the adoption of new programming languages. Think back to the days where C and C++ dominated. If AI was available back then, managed languages like Java, Go, C#, whatever… would’ve had a much harder time getting to critical mass. If no one is writing code in the language, there’s nothing to train AI on. If programmers begin to hate working without AI - or are unable - then they’ll stick to languages that AI does well with.
But overall this is just the new fear, whatever. I’ve spent my entire career studying and keeping up-to-date on whatever the current hot thing is just so I can continue to succeed at job interviews. It is a neverending cycle, true before AI, and true after.
That would seem to be dangerous in the 5-10 year time frame when those entry level engineers should be stepping into experienced roles, and experienced engineers are retiring or moving into management or other roles.
No foundation leads to a bleak outlook for the future.
I don’t think so. 75 years ago engineers sat at tables and designed with a compass and slide rule. I even took classes in engineering design with a compass in engineering school in 1987. But nobody does that anymore, and it doesn’t make the engineers graduating today any less able to do mechanical drawings. I had lots of classes in mathematics and electrical fields, but nobody actually uses any of that…we have tools that calculate everything much faster and more accurately. Software developers will not need to write low-level code…and it’s a pain anyway, mostly recreating code that has all been written before. What matters is being able to understand the problems that need to be solved and develop the specifications for the solution. AI doesn’t do that so well…yet. Tools continue to get better and eliminate the menial tasks, but there will always be new problems that have to be understood and described so we can harness the tools.
You may not need to use the same tools or perform the same steps, but it helps to have learned how those tools work and how the basic processes work and why. It’s what allows you to understand when something doesn’t look quite right and how to fix it, or why a result isn’t quite what you expected.
In my area of expertise for the Navy, we don’t physically fill in work sheets with sonar values and swath widths and system parameters anymore. We have computers that run all the calculations for us. But understanding how all those parameters result in the final answer helps you realize when the answer the software spits out just isn’t right, or could be tweaked better.
My concern with AI is how much power AI datacenters are consuming. I work on developing AI hardware. At some point soon we will have to have some regulation on how much energy we are willing to invest, as a society, in what AI is providing us…do we build a new AI datacenter, or do we power Utah? AI is incredibly powerful, but it is also about the least efficient way to perform any workload from an energy perspective…using AI for search, for example, is about 10x the power consumption of a Google search.
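To put that 10x claim in perspective, a back-of-envelope sketch; every number here is an assumption for illustration, not a measured figure:

```python
# Back-of-envelope sketch of the ~10x claim above. The per-query figures
# below are illustrative assumptions, not measured values.
GOOGLE_SEARCH_WH = 0.3   # assumed energy per traditional search, in Wh
LLM_QUERY_WH = 3.0       # assumed energy per LLM query (~10x), in Wh
QUERIES_PER_DAY = 1e9    # assumed daily query volume, for scale

def daily_mwh(wh_per_query, queries):
    """Total daily energy in megawatt-hours (Wh -> MWh)."""
    return wh_per_query * queries / 1e6

print(f"search: {daily_mwh(GOOGLE_SEARCH_WH, QUERIES_PER_DAY):,.0f} MWh/day")
print(f"LLM:    {daily_mwh(LLM_QUERY_WH, QUERIES_PER_DAY):,.0f} MWh/day")
```

Under these assumed numbers, the gap (roughly 2,700 MWh/day) is on the order of the daily output of a small power plant, which is why the datacenter-vs-city comparison isn’t hyperbole.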
The book The Coming Wave by Mustafa Suleyman is excellent. For Luddites like myself, it’s a real education on how far we have come and how far there is still to go.
“So in the past, for music, you had to go to the conservatory and study for years and years. Then someday, you could play in a symphony. And then, when punk rock came along, you could maybe learn three chords in a day—and suddenly there were all these bands. That made it for everybody. How I started in music was punk rock. If you had something to say, you could say it. You didn’t need the expertise or skill set, other than your idea and your ability to convey it. And why coding is the same thing—it’s the punk rock of coding. I think the biggest disconnect that I feel myself is that it’s such a strong tool that can do so much. We need some examples of the different things it can do. Now we’ll see it can make animation that looks like your favorite cartoon, and then you see a million people doing that. That’s one idea. I want to see all the things it could do to understand what’s possible—instead of just, “I’m going to get it to do the same thing everyone else is getting it to do.” I think it’s beyond our scope to understand what it actually can do, and I’m looking forward to some of the people who push the boundaries to see what it can do.”
Rick Rubin
I’m not gonna pretend I know fuck-all about what he’s talking about (the only thing I know, and have long forgotten, about anything close to coding was writing hypertext scripts for websites when I worked at the ad agency in the early years of the 21st century), and I’m not gonna just say “that sounds awesome” just because it’s Rick Rubin.
I work at a global engineering consulting company that encourages us to use a version of ChatGPT. I see a fair number of people using it as a calculator and a few using it to write parts of reports. The report writing worries me…ChatGPT can write passably, but not well enough that our customers won’t pick up on it. And if the human writer uses AI, will they know the subject well enough for later project phases?
I see sort of the opposite issue with software development. You can ask AI whatever and it will try to be helpful. But if an engineer is asking the wrong questions or guiding it down the wrong path, it will happily dive down a rabbit hole and waste a lot of time.
It’s all statistical correlation, based on your inputs. It will not creatively offer alternatives, even though its training data set might have alternatives buried somewhere.
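A toy sketch of why that happens; the scores are hypothetical, purely illustrative. Under greedy (or low-temperature) decoding, the statistically dominant continuation wins every time, so alternatives buried in the tail of the distribution never surface unless you explicitly ask for them:

```python
import math

# Toy next-token scores: alternatives exist in the training data, but they
# sit in the tail of the distribution. (Scores are hypothetical.)
scores = {"the usual approach": 5.0, "a rare alternative": 2.0, "another option": 1.5}

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = {k: math.exp(v / temperature) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(scores)           # ~93% / ~5% / ~3%
print(max(probs, key=probs.get))  # greedy decoding: 'the usual approach', every time
```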
This is another reason why experienced engineers are still valuable.