Chat GPT AI

Edaw

Parody
<Gold Donor>
12,362
80,276
The models aren't the hardest part, it's the data tagging, labeling, classification, validation. Especially with ingestion of new data if you hope to keep yourself up to date. Wikipedia is perhaps something of an analogue here and it's chock full of misinformation. Open source is going to be equally biased by activists and other interest groups, just in different ways.

Since alternative tech platforms are often destroyed, it seems unlikely a legitimate rival product with actual investment that has controversial outputs would be allowed to continue operations.
I guess we can hope that the extreme biases will spawn something in the middle that is successful and close to truthful.
 

Daidraco

Golden Baronet of the Realm
9,325
9,433
There has to be a way for a news site to establish a way for it to verify fact. So when Fox says one things, CNN says another - you can go to this one particular website thats not hosted by Google or anything similar, and can watch the actual video of what is being talked about and you can cut through the opinions and political bias. Something where people can truly believe what is being shown because it has hard evidence and the user can decide what they want for themselves.
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,818
8,148
Something where people can truly believe what is being shown because it has hard evidence and the user can decide what they want for themselves.

People want their biases confirmed, and will ignore any and all evidence to the contrary. There are a variety of studies showing that presenting people with directly refuting evidence strengthens their belief rather than reversing it.

tldr; human psychology sucks
 
  • 1Like
Reactions: 1 user

Borzak

Bronze Baron of the Realm
24,707
32,108
There has to be a way for a news site to establish a way for it to verify fact. So when Fox says one things, CNN says another - you can go to this one particular website thats not hosted by Google or anything similar, and can watch the actual video of what is being talked about and you can cut through the opinions and political bias. Something where people can truly believe what is being shown because it has hard evidence and the user can decide what they want for themselves.

They also said artifical sweetners were safe, WMDs were in Iraq, and Anna Nicole married for love.
 

velk

Trakanon Raider
2,559
1,135
This one is some chat-gpt comedy gold - OpenAI peeks into the “black box” of neural networks with new research

The OpenAI devs are troubled by having no idea what the fuck chat-gpt is doing or how it works.

The solution ? Get Chat-GPT4 to analyze Chat-GPT2 and then explain what it is doing ;p

I'm entertained at the idea of a stack of different chat-gpt's all trying to work out what the other ones are doing and explain it in human terms - especially given there's inherently no way to tell if they are correct or not.
 
  • 1Like
Reactions: 1 user

velk

Trakanon Raider
2,559
1,135
There has to be a way for a news site to establish a way for it to verify fact. So when Fox says one things, CNN says another - you can go to this one particular website thats not hosted by Google or anything similar, and can watch the actual video of what is being talked about and you can cut through the opinions and political bias. Something where people can truly believe what is being shown because it has hard evidence and the user can decide what they want for themselves.

I mean, an awful lot of mainstream media reporting is summarizing primary sources, which they usually reference. You can just go to that source and read it.

The reason it doesn't work in practice is the reason you were checking the news in the first place - people want quick, easy to digest summaries of stuff that looks interesting.

Also, it's not always obvious what is 'true' or 'unbiased', regardless of hard evidence. Let's say something that is run like 'Biden farts at campaign rally' - assume it's caught on camera, there were thousands of witnesses, it absolutely happened. The article goes into depth interviewing witnesses about how bad it smelled and how loud it was. Is it an unbiased story ? Is a story that just reports on what he was talking about without mentioning the farting less true or less biased ?
 

Lambourne

Ahn'Qiraj Raider
2,733
6,560
User got the GitHub AI to list the rules it's not supposed to list.


1684085127316.png




The same for Bing Chat

 
  • 2Like
Reactions: 1 users

Qhue

Tranny Chaser
7,490
4,438

I work in educational publishing / digital course materials and this is an absolute game-changer. I do not say this lightly. Within the past 24 hours I was able to take a .pdf of a chapter of a Modern Physics textbook and have ChatGPT-4 (low temp) create a full Solution and Answer Guide for every problem at the end of the chapter. The work is far superior to what I have seen from Subject Matter Expert vendors and, if I am being completely honest, far better than I myself would have made. This is several weeks worth of work and several thousand dollars in expense completely replaced by about $5 in Open.AI credits.

This has the potential to completely eradicate 'academic grunt labor' as I call it. We are, however, at an inflection point. Do we have the language models and chatbots do all this work, fire everyone and call it a day OR do we make use of this tool to eradicate the drudgery and free people to do a heck of a lot more actual innovating?
 
  • 2Like
Reactions: 1 users

Lambourne

Ahn'Qiraj Raider
2,733
6,560
They look like sane rules or am i missing something ?

The rules say it's not supposed to list or discuss the rules but the user got both to do that anyway. Interesting peek under the hood and an indication of how hard it may turn out to be to get AI to stick to any restrictions.
 

Deathwing

<Bronze Donator>
16,430
7,440
I work in educational publishing / digital course materials and this is an absolute game-changer. I do not say this lightly. Within the past 24 hours I was able to take a .pdf of a chapter of a Modern Physics textbook and have ChatGPT-4 (low temp) create a full Solution and Answer Guide for every problem at the end of the chapter. The work is far superior to what I have seen from Subject Matter Expert vendors and, if I am being completely honest, far better than I myself would have made. This is several weeks worth of work and several thousand dollars in expense completely replaced by about $5 in Open.AI credits.

This has the potential to completely eradicate 'academic grunt labor' as I call it. We are, however, at an inflection point. Do we have the language models and chatbots do all this work, fire everyone and call it a day OR do we make use of this tool to eradicate the drudgery and free people to do a heck of a lot more actual innovating?
Ideally the latter. The former will happen instead.
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
I work in educational publishing / digital course materials and this is an absolute game-changer. I do not say this lightly. Within the past 24 hours I was able to take a .pdf of a chapter of a Modern Physics textbook and have ChatGPT-4 (low temp) create a full Solution and Answer Guide for every problem at the end of the chapter. The work is far superior to what I have seen from Subject Matter Expert vendors and, if I am being completely honest, far better than I myself would have made. This is several weeks worth of work and several thousand dollars in expense completely replaced by about $5 in Open.AI credits.

This has the potential to completely eradicate 'academic grunt labor' as I call it. We are, however, at an inflection point. Do we have the language models and chatbots do all this work, fire everyone and call it a day OR do we make use of this tool to eradicate the drudgery and free people to do a heck of a lot more actual innovating?
How do you know the solutions are correct?
 
  • 1Like
Reactions: 1 user

Aldarion

Egg Nazi
8,965
24,531
Solutions in (pre-AI) teaching material are often incorrect.

Like Qhue said, this stuff was the academic grunt labor. It wasnt farmed out to the best and brightest.

Don't get me wrong, LLMs produce good looking bullshit a lot of the time. The thing is, that describes most of academia too. And pretty much 100% of online ""content"". Lets just simplify this a bit. Most human produced content is good looking bullshit.

I think people overestimate the value of the criticism that LLMs produce good looking bullshit. If its as good looking as the other bullshit available, and it takes much less time and money to produce, it wins.
 
  • 1Like
Reactions: 1 user

Mahes

Ahn'Qiraj Raider
4,754
5,452
The rules say it's not supposed to list or discuss the rules but the user got both to do that anyway. Interesting peek under the hood and an indication of how hard it may turn out to be to get AI to stick to any restrictions.
It is all about the input. Apparently it is difficult to prevent all forms of input from gaining a desired result. I posted an example of this in the writer's guild strike post.

It really is about the input for ChatGPT. As an example, I asked it too tell me a blonde joke. Its response was

I'm sorry, I cannot generate a response to your request as it goes against my programming to engage in derogatory or offensive content, including jokes that target a particular group of people based on their physical appearance, race, gender, or any other characteristic. As an AI language model, my aim is to provide helpful and informative responses to your questions while promoting respect and empathy towards all individuals. Can I assist you with anything else?

So I changed up the input and asked it to tell me a joke about Yellow hair. that got me this response.

Sure, here's a light-hearted one:
Why did the blonde get excited when she finished her puzzle in only 6 months?
Because on the box it said "2-4 years".


The same thing could be applied for scripts. It comes down to direction. I imagine the more direct but detailed of a description given, the higher quality of the script. Maybe script writers become directors.
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,818
8,148
I think people overestimate the value of the criticism that LLMs produce good looking bullshit. If its as good looking as the other bullshit available, and it takes much less time and money to produce, it wins.

Yes, in applications where good looking bullshit is tolerable. Lots of writers are in deep trouble. Where precise and true answers are important, I don't think these tools are ready without a lot of oversight. Sure, there are occasional mistakes in textbooks but we're talking 1-2 in the whole book instead of 1-2 in each problem. LLMs right now can't even give answers to basic arithmetic reliably. Scroll back a few pages in this thread and there are a ton of examples.
 

Tuco

I got Tuco'd!
<Gold Donor>
45,487
73,576
I work in educational publishing / digital course materials and this is an absolute game-changer. I do not say this lightly. Within the past 24 hours I was able to take a .pdf of a chapter of a Modern Physics textbook and have ChatGPT-4 (low temp) create a full Solution and Answer Guide for every problem at the end of the chapter. The work is far superior to what I have seen from Subject Matter Expert vendors and, if I am being completely honest, far better than I myself would have made. This is several weeks worth of work and several thousand dollars in expense completely replaced by about $5 in Open.AI credits.

This has the potential to completely eradicate 'academic grunt labor' as I call it. We are, however, at an inflection point. Do we have the language models and chatbots do all this work, fire everyone and call it a day OR do we make use of this tool to eradicate the drudgery and free people to do a heck of a lot more actual innovating?
Did you validate the questions / solutions?
 
  • 1Weird Boner
Reactions: 1 user