Chat GPT AI

pharmakos

soʞɐɯɹɐɥd
<Bronze Donator>
16,306
-2,236
The goal would then be to give it its own forum account and allow it free access to post on the board X number of times per day.

Maybe have it follow Folder around.
 

Daidraco

Golden Baronet of the Realm
9,325
9,433
The lawyer stuff is interesting, but afaik, it's not being trained on any of that data. Once a lawyer-trained API is available, the conversation will change dramatically, since we're working with something that... as has been said... is full of shit. I tried to use it for all the court documents involved in property management, and it gets them right for the most part. But some shit has to be presented in Virginia on certain documents, and it completely misses those requirements.

In a way, I'm sure they're already doing it, but I imagine ChatGPT ending up similar to a base game (as a comparison), with mods/addons coming out for it that have to be licensed through ChatGPT: giving that addon maker (API, w/e) access to ChatGPT to train it and mold it how they see fit, with the legal jargon to put responsibility for any rampant racism upon them, etc. ChatGPT, the company, continuously updating the base AI to become a better tool, over and over.
 
  • 2Like
Reactions: 1 users

Daidraco

Golden Baronet of the Realm
9,325
9,433
At least do a spoiler of the article so we don't have to play with script kiddies or pay to support liberal fags.

Geoffrey Hinton was... (First spoiler is just journalism 101 word count increase, scroll down.)
an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:
ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).
Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.
Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.
Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.

He isn't saying anything anyone else hasn't said already. It's really a pointless article, just a wall of text trying to give credence to the "concern" about AI. The only thing people "should" be worried about is the person behind the curtain telling the AI what the facts are, as he highlights.
 
  • 1Like
Reactions: 1 user

Palum

what Suineg set it to
23,617
34,172

While Google may die, what's the value proposition here, exactly, from a commercial standpoint?

Search engines were ruined by ads, ranking games and ultimately arbitrary ideological agendas, making them purposely less useful to users in order to monetize them. GPT-4 is exceptionally expensive compute-wise compared to a traditional search engine, and fundamentally you still need the backend services if you want to keep the information current.

Do people think public chatbots won't be completely compromised by the same commercial forces that have made search engines actively worse over the last two decades? If you think asking it how to remove a tomato juice stain isn't going to get you a suggestion to use Tide, advertising sponsor, I'm just curious why you believe things will be different this time.
 
  • 2Like
Reactions: 1 users

Borzak

Bronze Baron of the Realm
24,707
32,108
Saw one where they asked it to do a simple beam deflection calculation. It asked for the load, beam size and such, then produced the standard boilerplate about it not being its fault if you give it wrong info. Then it explained the process, but in the actual formula it pulled a variable with a name nobody had ever heard of, one that doesn't appear in the AISC steel manual. Needs tuning. Also, that's the very basic end; the more involved end is working backwards to get the beam size that supports X load under Y condition, not a simple yes/no.
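For reference, the calculation described above has a standard closed form. A minimal sketch, assuming the textbook case of a simply supported beam under a uniform load (the post doesn't say which setup the bot was actually given), including the "working backwards" direction:

```python
def max_deflection(w, L, E, I):
    """Midspan deflection of a simply supported beam under uniform load w:
    delta_max = 5*w*L^4 / (384*E*I). Units must be consistent
    (e.g. kip/in, in, ksi, in^4)."""
    return 5 * w * L**4 / (384 * E * I)

def required_inertia(w, L, E, delta_allow):
    """The inverse problem: given an allowable deflection (e.g. L/360),
    solve the same formula for the minimum moment of inertia, which you
    then match against section tables."""
    return 5 * w * L**4 / (384 * E * delta_allow)

# Example numbers (assumed, not from the post): steel E = 29,000 ksi,
# 30 ft span, 1 kip/ft uniform load, L/360 deflection limit.
E = 29000.0        # ksi
L = 30 * 12        # in
w = 1.0 / 12       # 1 kip/ft expressed as kip/in
I_min = required_inertia(w, L, E, L / 360)  # minimum in^4 required
```

Picking the lightest shape with a moment of inertia above that minimum, from tables like the AISC ones, is the part that isn't a simple yes/no.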
 
  • 1Like
Reactions: 1 user

Sanrith Descartes

Von Clippowicz
<Aristocrat╭ರ_•́>
41,536
107,628
Saw one where they asked it to do a simple beam deflection calculation. It asked for the load, beam size and such, then produced the standard boilerplate about it not being its fault if you give it wrong info. Then it explained the process, but in the actual formula it pulled a variable with a name nobody had ever heard of, one that doesn't appear in the AISC steel manual. Needs tuning. Also, that's the very basic end; the more involved end is working backwards to get the beam size that supports X load under Y condition, not a simple yes/no.
Yeah, it's really shitty at math. Even basic shit. Fucker was trained on common core math.
 

velk

Trakanon Raider
2,559
1,134
While Google may die, what's the value proposition here, exactly, from a commercial standpoint?

Search engines were ruined by ads, ranking games and ultimately arbitrary ideological agendas, making them purposely less useful to users in order to monetize them. GPT-4 is exceptionally expensive compute-wise compared to a traditional search engine, and fundamentally you still need the backend services if you want to keep the information current.

Do people think public chatbots won't be completely compromised by the same commercial forces that have made search engines actively worse over the last two decades? If you think asking it how to remove a tomato juice stain isn't going to get you a suggestion to use Tide, advertising sponsor, I'm just curious why you believe things will be different this time.

Sure, it'll probably end up shit eventually, but for now, Microsoft isn't an advertising company like Google is - they currently have no incentive to shit up the search results to make more money.

This is impossible for Google - as you pointed out, anything they do to make search less biased means they make less money, so they spend a lot on fancy search to earn less.
 

Control

Ahn'Qiraj Raider
2,271
5,764
While Google may die, what's the value proposition here, exactly, from a commercial standpoint?

Search engines were ruined by ads, ranking games and ultimately arbitrary ideological agendas, making them purposely less useful to users in order to monetize them. GPT-4 is exceptionally expensive compute-wise compared to a traditional search engine, and fundamentally you still need the backend services if you want to keep the information current.

Do people think public chatbots won't be completely compromised by the same commercial forces that have made search engines actively worse over the last two decades? If you think asking it how to remove a tomato juice stain isn't going to get you a suggestion to use Tide, advertising sponsor, I'm just curious why you believe things will be different this time.
Oddly enough (or maybe not), I don't think it was commercial forces that ruined Google, it was ideological ones. It made a literal mountain of money just putting a few ads alongside your search results. It's only once its ideological bias started to run unchecked that it shit the bed. Of course, the same bed-shitting is embedded in these AI models from the start, but it's not commercial forces pulling the strings.
 

Palum

what Suineg set it to
23,617
34,172
Oddly enough (or maybe not), I don't think it was commercial forces that ruined Google, it was ideological ones. It made a literal mountain of money just putting a few ads alongside your search results. It's only once its ideological bias started to run unchecked that it shit the bed. Of course, the same bed-shitting is embedded in these AI models from the start, but it's not commercial forces pulling the strings.

It's all interconnected. I'm not disagreeing that the poison pill is being forced in early this time, though. I'm just curious for alternate opinions to my cynicism, having gone through these product cycles so many times since the mid-'90s.
 
  • 1Like
Reactions: 1 user

Edaw

Parody
<Gold Donor>
12,361
80,275
It's all interconnected. I'm not disagreeing that the poison pill is being forced in early this time, though. I'm just curious for alternate opinions to my cynicism, having gone through these product cycles so many times since the mid-'90s.
Open source. A lot of people will continue to use filtered, packaged products, but the ability to open-source an unfiltered AI search engine will have an impact. The search engine will become an app, or an add-on app like adblock. Imagine being able to filter out 'ad', 'sponsored' or other biased results and only get truly relevant ones.
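The adblock-style filtering described above is trivial client-side once results carry any kind of label. A toy sketch, where the `label` field and its values are hypothetical, not any real search API:

```python
def filter_results(results, blocked_labels=("ad", "sponsored")):
    """Drop results whose label is in the blocked set; keep the rest.
    Results are plain dicts; unlabeled results are kept."""
    blocked = {label.lower() for label in blocked_labels}
    return [r for r in results if r.get("label", "").lower() not in blocked]

# Hypothetical example results for "how to remove a tomato juice stain"
results = [
    {"title": "Tide official store", "label": "sponsored"},
    {"title": "How stain chemistry works", "label": "organic"},
    {"title": "Buy detergent now", "label": "ad"},
]
organic = filter_results(results)
```

The hard part, of course, isn't the filter; it's getting results honestly labeled in the first place, which is exactly what the packaged products have no incentive to do.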
 

Captain Suave

Caesar si viveret, ad remum dareris.
4,818
8,148
While Google may die, what's the proposition here exactly from a commercial standpoint?

Maybe there's an opportunity here to step away from the "you are the product" corrosive ad model. I might actually subscribe to a useful, ad-free, truly private search/text concierge service.
 
  • 2Like
Reactions: 1 users

Control

Ahn'Qiraj Raider
2,271
5,764
It's all interconnected. I'm not disagreeing that the poison pill is being forced in early this time, though. I'm just curious for alternate opinions to my cynicism, having gone through these product cycles so many times since the mid-'90s.
I guess I was just saying that there are still literal mountains of money to be made commercializing in relatively non-obtrusive ways. The potential is there, although I find it unlikely that any big company (or vc funded startup for that matter) will actually leverage it.
 
  • 1Like
Reactions: 1 user

Palum

what Suineg set it to
23,617
34,172
Open source. A lot of people will continue to use filtered, packaged products, but the ability to open-source an unfiltered AI search engine will have an impact. The search engine will become an app, or an add-on app like adblock. Imagine being able to filter out 'ad', 'sponsored' or other biased results and only get truly relevant ones.

The models aren't the hardest part; it's the data tagging, labeling, classification and validation, especially the ingestion of new data if you hope to keep yourself up to date. Wikipedia is perhaps something of an analogue here, and it's chock full of misinformation. Open source is going to be equally biased by activists and other interest groups, just in different ways.

Since alternative tech platforms are often destroyed, it seems unlikely a legitimate rival product with actual investment that has controversial outputs would be allowed to continue operations.
 
  • 1Like
Reactions: 1 user