I suspect we’ll see training sets that let the image-oriented AIs do this sort of thing within a year. The speed at which these tools are evolving is breathtaking.
I was just trying to help somebody yesterday who had generated a YAML configuration file with a chatbot and couldn’t get it working: it was superficially what they wanted, but the details were all wrong.
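The original file isn’t shown, so the details below are entirely hypothetical, but the failure mode tends to look something like this: YAML that parses cleanly and uses all the right key names, with one of them nested at the wrong level.

```yaml
# Hypothetical sketch: every key name is plausible and the file parses,
# but "port" sits at the top level instead of under "server", so the
# application silently falls back to its default port.
server:
  host: example.com
port: 8080
logging:
  level: debug
```

Nothing errors out, which is exactly why it’s hard to debug: the output is plausible, just not correct.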
While this technology is amazing, I don’t think language models in particular are ever going to be able to write code or solve math problems or that sort of thing, no matter how good they get, because at the end of the day they’re not trained for correctness; they’re trained for plausibility. They produce something that looks like a response to your prompt, but there’s no understanding of what you asked for and no ability to validate the accuracy of the output. It’s like trying to write a novel using autocomplete.
That’s a major issue with these AIs: they can return a solution that’s superficially correct, or even fully functional, but it’s rarely the best one.
I’ve used ChatGPT to generate Agile user stories and acceptance criteria, and for the most part, they’ve been outstanding. But when I asked it to generate a complex regex statement, it returned something that was functional, but very slow, unoptimized, and unnecessarily convoluted. Still, the fact that it returned anything other than random punctuation and alphabet soup is awe-inspiring to me.
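The regex itself isn’t quoted, so here’s a hedged sketch of that failure mode using a classic example of my own (the patterns and input strings are mine, not from the original): a pattern for a comma-separated word list that works on the happy path but is convoluted, gets an edge case wrong, and backtracks badly.

```python
import re

# Hypothetical "functional but convoluted" pattern: it does match a
# comma-separated word list, but the nested quantifiers ((\w+,?)+) make
# the regex engine try exponentially many ways to split near-miss input
# such as "aaaa...a!" before giving up.
convoluted = re.compile(r"^(\w+,?)+$")

# A simpler equivalent that matches in linear time.
tidy = re.compile(r"^\w+(,\w+)*$")

# Both handle the happy path.
assert convoluted.match("one,two,three")
assert tidy.match("one,two,three")

# But the convoluted form quietly accepts a trailing comma,
# while the tidy form rejects it.
assert convoluted.match("one,two,")
assert not tidy.match("one,two,")
```

This is the “functional, but slow and unnecessarily convoluted” shape: the generated pattern passes a quick eyeball test, and the problems only show up on edge cases or pathological input.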
I hear a lot of people confidently stating that “AI will never replace [my job]”, but that feels like whistling past the graveyard. Less than a year ago, narrators were declaring the same thing with contempt, but Apple is now rolling out AI narration for its audiobooks that’s impressively good.
And I think that AI will improve rapidly on a variety of challenges as it’s trained on larger and more specific data sets. It’s going to democratize a lot of industries, and it’s also going to be tremendously disruptive. I wouldn’t be surprised to see legislation attempting to stuff the genie back in the bottle, even as those legislators experiment with AI to draft legislation.
An expert system is pretty much by definition able to solve routine issues: it can have “all the possible case law” ready in moments instead of weeks. Turning that into the best possible metaphor for the case is another matter, and one such systems are rarely used for.
ChatGPT is nearly the opposite of an expert system.
Which is kind of what I was getting at when I emphasized language models in my earlier comment. The achievements of machine learning over the past several years have been impressive, but it’s not the only approach to AI. In particular, it is not an approach based on symbolic reasoning or logical inference: those are the ones that famously fizzled out and tarnished the entire field. But is deep learning really the only viable technique, or is it that the unimaginable amounts of computing power we can now throw at these problems have let researchers explore ideas that were previously dead ends? Could we see the same practical gains with so-called “old-fashioned AI” if anybody were willing to take the risk of working on it?
As far as I can tell, everything I’ve seen is still “expert systems” plus ever-increasing computing power. The amount of data on hand can be mind-boggling, but I haven’t heard of anything yet achieving what has been called “hard AI.” And there’s the sheer inadequacy of any binary-based system, where each bit is simply on or off: no matter how many bits you have, it can’t reach the singing chorus of a billion cells dancing their connections.