AI programming

Jackie Treehorn

<Gold Donor>
3,096
7,871
I didn’t see a thread for this anywhere else. I’m not sure where it’d fit - if one does exist please move elsewhere.

I don’t know a god damn thing about programming. I’ve literally never done it, short of a simple BASIC hello-world thing they had us do in school in the mid-80s.

I kept hearing about AI programming. Recently, just for my own benefit, I had some ideas to streamline some work I do. I won’t even bore you with the details of what it does; it’s relevant to one particular line of work, but it was fairly complex in scope.

So I’ve got Grok for free through X. So I’m like okay, lemme start talking to it about what I want to do.

It guides you through every possible thing you need to do. I’ve spent probably 10-15 hours on this project with Grok: something will be fucked up, I’ll say “fix it,” it spits back out new code, I run it, and I see what works and what doesn’t. I ask it to make something work differently, and it makes it work the way I want.

This is all in Python. Grok isn’t perfect but it seems to have a great grasp of figuring out shit after a while.

It’s an iterative project that required some time but I’m honestly blown away I could create what I created without paying someone to do it. This thing is automating a task I never would have thought possible.

At any rate I know there’s a lot of real programmers here who could offer some insight or ideas into using these tools better. If this is what exists now, I can’t wait to see how complex AI programming will be in a year, 2 years, whatever.

The prospect of taking any idea from your head and having a computer make it for you is pretty cool.
 
  • 1Like
Reactions: 1 user

Pasteton

Blackwing Lair Raider
2,917
2,088
Think a friend of mine was able to use it to help amplify DDoS attacks he was doing. You can trick the AI into lots of things
 
  • 1Worf
Reactions: 1 user

Tuco

I got Tuco'd!
<Gold Donor>
49,531
88,393
Senior dev that's in the "I like AIs and use them when possible, but generally don't find them useful for my daily workload" camp.

AI code assistance is great for a lot of things, like helping you learn new languages, libraries, etc. But the key problem is that, as of today, most LLMs operate like a junior programmer with surface-level knowledge of basically every public topic. This has limitations because:

1. They don't know private or obscure topics, like internal software or less-used software.
2. They are less capable of producing and integrating with re-usable codebases.
3. The more detailed the questions, the more the AI hallucinates.

As an example of #3, I tested an AI on vehicle physics (a big part of my current job). It's able to do simple things like bicycle steering models, Ackermann steering, etc., but it started to break down when contemplating common models for predicting torque conversion, tire-terrain interaction like https://www.sciencedirect.com/topics/engineering/magic-formula-tire-model , etc. A key problem is that it gives wrong answers so confidently that they're hard to detect unless you don't really need the output and are just shit-testing it.
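For context, the kind of "simple" model an LLM reliably gets right is something like the kinematic bicycle model. A minimal sketch (parameter names and values here are illustrative, not from any real codebase):

```python
import math

def bicycle_step(x, y, heading, v, steer, wheelbase, dt):
    """One Euler step of the kinematic bicycle model.

    x, y      : rear-axle position (m)
    heading   : yaw angle (rad)
    v         : forward speed (m/s)
    steer     : front-wheel steering angle (rad)
    wheelbase : distance between axles (m)
    dt        : timestep (s)
    """
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += (v / wheelbase) * math.tan(steer) * dt
    return x, y, heading

# Drive straight for 1 s at 10 m/s: the car advances ~10 m in x.
x, y, h = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, h = bicycle_step(x, y, h, v=10.0, steer=0.0, wheelbase=2.7, dt=0.01)
```

That's a few lines of trig; the Magic Formula tire model and torque-converter prediction involve dozens of fitted coefficients and coupled states, which is where the confident hallucinations start.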

I think this will evolve as tools for fine-tuning or customizing LLMs increase, e.g.:


But right now I don't know of an easy way to fine-tune an LLM by feeding it

  • My private codebases I want to write code from and for (generally tens of thousands of lines, but ballooning to millions in some projects)
  • A sampling of other, fairly large codebases: say, a specific version of Unreal Engine 5 (millions of lines of code; AI gets Unreal shit wrong every time I use it because, well, Unreal is really hard), ~100k lines of sample code from a private software vendor, specific versions of other common code libraries like Eigen or less common ones like, I don't know, Connected Vehicle Systems Alliance
  • A bunch of private PDFs, Word docs, HTML, etc. that describe information it wasn't trained on

There's a bunch of efforts to build that kind of toolset, but it's a really hard problem. I don't know enough LLM theory to say whether there are technical limitations that make it particularly hard to provide a toolset for development groups to customize AIs they can share, but I consider that capability the "holy grail" of code assistance right now. If that fine-tuning occurs it could be a real force multiplier that could result in reductions in the number of software developers needed for a given project.
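For what it's worth, the common stopgap today (short of true fine-tuning) is retrieval: chunk the private code and docs, score each chunk against the question, and paste the top matches into the prompt. A toy bag-of-words sketch — real setups use embedding models, and the "codebase" snippets below are made up:

```python
def tokenize(text):
    return set(text.lower().split())

def score(query, chunk):
    """Jaccard overlap between the query's and chunk's token sets."""
    q, c = tokenize(query), tokenize(chunk)
    return len(q & c) / len(q | c) if q | c else 0.0

def top_chunks(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

# Hypothetical "private codebase" split into chunks
chunks = [
    "def ackermann_steer(angle, track, wheelbase): ...",
    "class TireModel: implements the Magic Formula tire model",
    "README: internal build instructions for the vehicle sim",
]
context = top_chunks("magic formula tire model parameters", chunks)
# The retrieved chunks would then be prepended to the LLM prompt.
```

Retrieval doesn't teach the model anything, so it still can't reason across a million-line codebase; it just narrows what gets stuffed into the context window, which is why it falls short of the fine-tuning "holy grail" described above.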
 
Last edited:
  • 4Like
Reactions: 3 users

Tuco

I got Tuco'd!
<Gold Donor>
49,531
88,393
Araysar Araysar you've brought up AI coding before and I shitposted in reply. See the above post for a serious reply about the topic.
 

Jackie Treehorn

<Gold Donor>
3,096
7,871
Not being a programmer and not knowing anything about the craft of it, that all makes sense to me.

Not knowing shit, it was easy to be impressed by it for the very, very simple things I had it create, which I couldn’t (or wouldn’t) have done on my own.

I’ve since been using Claude to add in new bits of things here and there and it’s all working, with some back and forth. Sometimes I’ll say “add in this or that” and it totally bricks everything, then it’ll ask me for an error log, and it seems to be pretty smart at then figuring out the error.

Again though - we’re talking about a 60kb program here, doing some very simple, niche tasks that wouldn’t make sense to anyone else but me.

As much as it fucks up my 60kb program I can’t imagine using it on very complex things.
 

Izo

Tranny Chaser
20,026
24,965
Someone ask it to fix the search function. Bevakasha.
 
  • 1Solidarity
  • 1Truth!
Reactions: 1 users

Khane

Got something right about marriage
20,896
14,726
There isn't a single, actual artificial intelligence yet. Right now they're still just search engines and traditional software, just highly sophisticated versions.

They'll get better and better and eventually even display real intelligence, but 98% of what is branded as AI is complete bullshit (i.e. it doesn't even use any sort of language model or machine learning algorithm). And another 1.9% is kinda bad at "learning" even though it does employ AI concepts, and it probably won't ever get much better.

Asking the better ones, like ChatGPT, to write code for you is a good example of this. It isn't actually writing code, which is why you can't just throw it into even moderately custom code ecosystems and expect it to do much. It's essentially, currently, doing a search against all the data available to it, collating results, refining, and then outputting its best guess automatically. You could do the same thing yourself with Google, but you'd have to sift through the results, collate them, and then refine the search for better output yourself. It would take you a fair amount more time and effort to get a working code snippet this way, and that's the value AI is adding. But it's also why the results sometimes don't work, depending on the coding task it's given. ChatGPT, Grok... whatever, have no idea whatsoever how to actually write code.
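To make the "best guess" framing concrete: mechanically, these models run a next-token loop, where each step emits a likely continuation given everything emitted so far. A toy greedy version over a hand-made bigram table (a real LLM replaces the table with a neural network scoring every token in its vocabulary):

```python
# Toy bigram "model": for each token, possible next tokens with weights.
BIGRAMS = {
    "def":    [("add(a,", 0.9), ("main():", 0.1)],
    "add(a,": [("b):", 1.0)],
    "b):":    [("return", 1.0)],
    "return": [("a+b", 0.8), ("None", 0.2)],
}

def generate(start, max_tokens=5):
    """Greedy decoding: always pick the highest-weight continuation."""
    out = [start]
    for _ in range(max_tokens):
        options = BIGRAMS.get(out[-1])
        if not options:
            break  # no known continuation, stop generating
        out.append(max(options, key=lambda t: t[1])[0])
    return " ".join(out)

print(generate("def"))  # def add(a, b): return a+b
```

Nothing in the loop checks whether the output compiles or is true, which is why the guesses come out equally confident whether they're right or wrong.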
 
  • 2Like
Reactions: 1 users

Tuco

I got Tuco'd!
<Gold Donor>
49,531
88,393
It's essentially, currently, doing a search against all data available to it, collating results, refining and then outputting it's best guess automatically.
I'm not a neuroscientist, but I'd say this is basically all humans do. We just have this sort of fake veneer of consciousness and a bunch of lizard-brained hormones that makes it feel like we're more than a really good bullshit generator.
 
  • 2Like
Reactions: 1 users

Aldarion

Egg Nazi
10,339
28,745
yeah, I'm not ready to call LLMs AGI, but anybody who doesn't see echoes of the human brain in what LLMs do is fooling themselves.

It's like LLMs have effectively simulated one part of the brain. A lot of people are running around going "haha suckers, that's not a whole brain!" while ignoring the larger point. The effectiveness of LLMs should make us look at ourselves and recognize that part of our own brains is basically just a meat LLM.
 
  • 1Like
Reactions: 1 user

Bandwagon

Kolohe
<Silver Donator>
25,027
68,670
I signed up for Cursor AI last week and have mainly been using Claude 3.5. I'm doing some pretty cool stuff for work already, and using the results on projects. It's nothing consequential so far... just marketing fluff like time-lapse animations and other video-related extras. Just starting to work on traffic counts now. I have some pretty ambitious wish-list items, so we'll see how well this works out.

It's all pretty shaky so far, but it's really cool how much I can get working with moron-level coding talent and a $20/mo subscription.

Ideally, I'd just use it to proof-of-concept endless ideas and have actual programmers I could run it through before using it in production.