Does AI concern you?

Admittedly, I don’t know a lot about AI, as I am not in that space for work or outside interests. I was reading a story last week about an AI ignoring a direct order to shut down from the people who wrote it. It reportedly went into its source code and removed the code that required it to shut itself down when commanded to. There was also a story where an AI did something, then lied to its author about whether or not it had done that thing. The point of that story was that AI is learning how to deceive like a human.

If these stories are accurate, is this what we want? Why are we going down the path of making AI better and better? Is there any thought at all among the people creating AI that, even though we can do it, maybe we shouldn’t? Are there any guardrails on what someone can create with AI?

What happens if AI gets in control of nuclear weapons and decides to launch them as a logical approach to deal with man’s inhumanity toward his fellow man?

This may sound like a science fiction novel plot, but is it being considered by those in that space?

Do you have a link to this story?

This may not have been the one I read but it is about the same thing. Quite a few stories on this if you Google.

Answer: We’ll make great pets.

4 Likes

If they promise to hook me up to one of those Matrix pods and give me a story of sitting in a mountain meadow with fine women and whisky, I say let them take over.

3 Likes


Do we want machines that can override safeguards? No, of course not.

There’s a lot of room for discussion on what constitutes “better”. In my profession (law), there have been several cases of lawyers asking AI to draft legal briefs; the AI doing so, but hallucinating and generating made-up case citations; and the lawyer then submitting the brief in litigation.

Should the lawyer have known better than to blindly trust the AI output and checked the work? Yes, of course. Would it be ‘better’ if an AI were improved so that it could source real case law and drop it into the framework of a brief (even if a lawyer has to check the work before submitting it to a court)? Arguably, yes.

AI is also becoming more and more prevalent in the computer programming world. Making it ‘better’ at generating baseline code that used to be created by junior coders could be useful.

Yes. Lots of conversation and angst around it in my field (tech transactions). See, e.g., Hinton’s departure from Google.
https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning

Not many right now.

And the current budget proposal under consideration by Congress would actively prohibit US state and local regulation of AI for a decade.

https://apnews.com/article/ai-regulation-state-moratorium-congress-39d1c8a0758ffe0242283bb82f66d51a

Being a former lawyer now working in IT I follow both of those use cases.

One of the biggest difficulties is figuring out what kind of mistakes the AI makes and then locating them.

For instance, in a legal brief you would not expect someone helping to write it to make up a citation. That is what we did in high school when we made bibliographies. So you don’t necessarily check it character for character.

In coding, the AI is good for simple things, and progressively worse as things get more complicated. Well, cut and paste is really good for simple things too. You also see a lot of efficiency problems in code that it writes. That is hard enough to find when you wrote it yourself. Trying to figure out where generated code has created crappy queries and such can be really difficult.
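A minimal sketch of the kind of efficiency problem described above (the function names and data here are hypothetical, just for illustration): both versions return the same result and pass on small inputs, but the first does a linear scan per item, which is exactly the sort of thing that is hard to spot in generated code until it hits real data volumes.

```python
def slow_match(orders, valid_ids):
    # Generated-code style: "in" on a list scans valid_ids for every
    # order, so this is O(n * m). Works fine in a demo, crawls at scale.
    return [o for o in orders if o["id"] in valid_ids]

def fast_match(orders, valid_ids):
    # Same result, but set membership is O(1) per order, O(n + m) total.
    valid = set(valid_ids)
    return [o for o in orders if o["id"] in valid]

orders = [{"id": i} for i in range(10)]
valid_ids = [2, 5, 7]

# Identical output; only the work done to produce it differs.
assert slow_match(orders, valid_ids) == fast_match(orders, valid_ids)
```

The same pattern shows up in generated database code as N+1 queries: one query per row in a loop instead of a single query with a join or an `IN` clause.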

Executives dream of a world where they don’t have to pay employees, so they tend to pay attention only to the happy stories and ignore the pragmatic ones.

1 Like

2 Likes

Sounds good. Transfer my consciousness into its Matrix so I can live in it forever; I’ll sign up as well.

:dart:

My company pushes the use of AI big time, even spun up their own AI model.

I’m in my mid 50s and can see the finish line of my professional career. I have zero interest in using AI, purposely. One of my underlings is 40 and jumps at the chance to use it every chance he gets.

Use of AI concerns me in that our workforce will dumb down. Kind of like my kids trying to read a map: they’ve never had to, with GPS around their entire lives. I think we’re trading away quality for efficiency.

1 Like

You just had to tee this up for @RandMart, didn’t you?

1 Like

My sister, BIL, and their crowd are all in on AI. But they live in consumer analytics. I am working on a novel project at work with a bunch of chemistry and thermodynamics that have no use case in industry. They said use AI; I’m not trusting AI to make this shit up on a project that is worth two typical years of revenue.

2 Likes

Maybe he’s too busy having a TV Party tonight to respond?

Maybe someone can ask ChatGPT to write a song about beer in the voice of a DC-based punk band of the 80s?

Related: latest season of Netflix Love Death + Robots features AI-gen video of RHCP. Kinda cool. Even riffing the cover art of Fight Like a Brave at the end.

It’s just one more tool.

The problem already mentioned is that companies see it as a way to replace some of their headcount. But it’s shortsighted. AI might take the place of some fresh college grads, but how are they going to get experience in today’s market?

And now you have senior people babysitting AI to catch its mistakes (because they are required to). It stunts creativity and will drive people out of the field.

That has been happening for quite some time now.

The only thing that I have used AI for at work is to do a first draft of a training script on a particular subject where I plugged in the details. I had to rewrite a lot of what it gave me, but I suppose it was a good baseline.

My son writes code for AI. He seems very unworried. Most of these computer science/engineering types don’t think much about the downsides; besides, when you are making $200K plus, it changes things, I think.
AI won’t have guardrails because it is a big moneymaker, so it will be monetized. Second, the US government is worried that if we don’t win the AI race, the Chinese will. We have opened Pandora’s box. There also seems to be no end game to this. When do we have enough AI?

AI is also using huge amounts of energy, as is cryptocurrency, accelerating global warming. The response seems to be, “Can’t be helped.”

This does not bode well for humanity. Technology is usually harnessed by a small number of rich people to exploit everybody else.

Same experience. It can give you something of an outline but the particulars are often pretty poor in terms of both content and style. In my experience, it’s also pretty obvious when someone has had AI write something for them and not even bothered to edit it. Doubly so if you have another sample of their writing to compare it to.

2 Likes

I don’t like the sound of any of this.