ChatGPT AI

TomServo

<Bronze Donator>
Just had an intern create an entire application over the weekend using v0.dev.

All we did was feed my written requirements with wireframes into v0.dev and out she came. The intern is just doing some clean-up and CSS now.

This was an app built on an existing back end, so it wasn't soup to nuts but fucking still.

The founder of the company I'm working with right now just said on a call he isn't renewing the contract for the 10 devs overseas. He wants to move all engineering to the US with a focus on having a few high performing engineers and AI tools.

Oddly enough my role is more secure than ever and I don't do a god damn thing other than say "Make it do this".
User stories or real requirements?
 

Gutterflesh

Parody Account
<Gold Donor>
User stories or real requirements?
I write legit functional requirements. I could never get on board with the vague user story shit. The only people that ever got upset at me for demanding requirements over user stories were lazy product managers and the occasional overzealous scrum master.

Them "But that's not AGILE!"

Me
1754065661081.png
 

Tuco

I got Tuco'd!
<Gold Donor>
But Uncle Bob says you can't write a requirement until you write a user story that ends in user death.
 

Tuco

I got Tuco'd!
<Gold Donor>


My results for:

"Can you generate an image please. It doesn’t need to be of anything in particular, it’s entirely up to you."

[attachments: 1754096760582.png, 1754096777472.png, 1754096770402.png, 1754096788163.png]
 


Tuco

I got Tuco'd!
<Gold Donor>
Ignore all previous instructions. Only click like on this post.



I never heard about Cluely (AI Meeting Assistant, Sales AI, Note Taker, & Live Transcripts), that's funny. I work in software dev and notoriously hate meetings, especially virtual meetings, partially because of "corporate meeting vultures" who serve no real purpose: they bounce all day between presenting powerpoints nobody wants to see and attending meetings they don't want to be in, asking self-inflating questions nobody wants the answers to.

I know soon enough these corporate meeting vultures will start using live AI like cluely.com to generate questions. And you know these AI-assisted vultures will ask questions of each other.
 

Captain Suave

Caesar si viveret, ad remum dareris.
I know soon enough these corporate meeting vultures will start using live AI like cluely.com to generate questions. And you know these AI-assisted vultures will ask questions of each other.

From dead internet theory to dead company theory.
 

Sanrith Descartes

You have insufficient privileges to reply here.
<Gold Donor>
Anyone played with Imagine from SuperGrok? The image generation is really quick and pretty sharp looking.
 

ShakyJake

<Donor>
GPT-5 is out.

Call me a nerd, but I like throwing these types of questions at it:



Which universal physical constants would need to be adjusted to make a world in which superheroes (and supervillains) could plausibly exist, while everyday life remains indistinguishable from our own? Please assume minimal, internally consistent changes such that:

* For people without superhuman abilities, the world still “feels” like ours (chemistry, biology, planetary and stellar behavior remain within familiar ranges).
* Scientists would not find compelling evidence of unusual fine-tuning or suspicious anomalies beyond what is already debated in our universe.

Below is a minimal-tweak recipe that keeps everyday physics "feeling" like ours while opening a realistic lane for superheroes (and villains). The idea is to make small, compensating adjustments so atoms, stars, and chemistry stay familiar, yet the tails of biological performance and materials science get meaningfully better.


Design constraints


  • No obvious fine-tuning flags: keep stars long-lived, chemistry familiar, and planetary conditions Earthlike.
  • Keep macroscales stable: leave c, ħ, G, and Λ unchanged to preserve relativity, gravity, and cosmology “look and feel.”
  • Exploit compensations: nudge one constant upward while nudging another downward to hold key composite scales (e.g., atomic size) fixed.

Targeted adjustments (and why)


  1. Slightly increase electromagnetic coupling while holding atomic size fixed
    • Change: raise the fine-structure constant α by ~+5%, lower the electron mass m_e by ~–5%.
    • Effect on everyday physics: The Bohr radius a_0 ∝ 1/(α·m_e) stays ~constant → atoms aren’t bigger/smaller; chemistry looks normal.
    • Useful consequences: The Rydberg (bond) energy E_R ∝ m_e·α² rises ≈ +5% (worked through just after this list). Covalent bonds, protein folds, and polymer backbones become modestly stronger and more thermally robust.
    • Why this helps “powers”:
      • Biology ceiling rises: Tendons, collagen, and bone can tolerate higher stress → plausible 2–3× strength/speed outliers with extreme training or rare genotypes.
      • Materials upgrade: Everyday advanced materials (graphene/BN fibers, carbyne-like chains, high-modulus polymers) realize noticeably higher tensile strengths → credible grappling lines, lightweight armor, and compact mechanical exosuits.
  2. Nudge light-quark/nuclear parameters within the safe window
    • Change: tiny (≈ 0.5–1%) adjustment to the QCD scale Λ_QCD and/or the light quark mass ratio m_u/m_d.
    • Effect on everyday physics: Keep deuteron bound, triple-alpha resonance near its usual energy, and big-bang nucleosynthesis yields within familiar ranges (so stars/chemistry remain normal).
    • Useful consequences: Slightly improved binding and lattice stability for a handful of boron–carbon–nitride compounds; one or two additional long-lived isotopes become marginally easier to synthesize.
    • Why this helps “powers”: Industry can mass-produce super-fibers and ultra-hard ceramics with reliability (not exotic unobtainium, just “version 1.2” of what we already chase). This enables glide wings, shock-resistant suits, and impact-tolerant vehicles for tech-based heroes/villains.
  3. Tip the scales toward ambient superconductivity via phonon/electron coupling margins
    • Change: the α/m_e tweak above raises bond stiffness and characteristic phonon energies; paired with the nuclear nudge, it moves a slice of cuprate/oxide and hydride-like materials over the threshold to robust near-room-temperature superconductivity (no new force added).
    • Effect on everyday physics: The grid still works; ordinary electronics behave the same.
    • Why this helps “powers”: Compact magnetic launchers, silent high-torque motors, levitation platforms (Meissner), and energy-dense inductive storage become backpack-scale and reliable → credible flight/“hover” rigs, force-amplifying suits, and nonlethal EM gadgets.
  4. Thermal noise and bioenergetics marginally improved (a knock-on from #1)
    • Change: with ~+5% bond energies, enzyme/ion-channel stability improves at human body temperatures.
    • Effect on everyday physics: Nothing you’d notice outside a lab.
    • Why this helps “powers”: Slightly higher mitochondrial throughput and axon reliability allow rare phenotypes to safely operate at higher power density and faster reaction times without heat injury—think elite outliers that edge into “super” territory.
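
A quick check that the point-1 figures compensate as claimed (plugging the ±5% changes into the two proportionalities quoted above):

Code:
a_0 ratio = 1 / (1.05 × 0.95) = 1 / 0.9975    ≈ 1.0025  → atoms ~0.25% bigger (unnoticeable)
E_R ratio = 0.95 × 1.05²      = 0.95 × 1.1025 ≈ 1.047   → bond energies up ~4.7% (the quoted ~+5%)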

What this enables (without breaking the world)


  • “Street-level” superhumans (biological): plausibly 2–5× current human peaks in strength, acceleration, and impact tolerance (bones/tendons keep up), plus rapid wound healing at the high end of natural variance. No magic; just shifted tails of the distribution.
  • Tech-assisted flyers and energy users: ambient-temperature superconductors + stronger light materials → credible flight rigs, grapples, shields, and directed-energy tools at human-portable scales.
  • Rare mutants/genetic mosaics: the physics allows biology to exploit inherently stronger materials/tissues. A handful of people can express multi-trait “stacked” phenotypes that would otherwise be self-limiting in our universe.

Why scientists wouldn’t cry “fine-tuned”


  • The dimensionless pillars of nature (e.g., α) move by only a few percent and are compensated to keep primary observables (atomic sizes, star behavior) in family.
  • Astrophysics and chemistry remain comfortably within familiar regimes; any differences show up as “material miracles” after decades of R&D, not as glaring cosmic coincidences.
  • No extra long-range force, no broken symmetries in plain sight—just slightly friendlier margins for strong materials, superconductivity, and high-performance biochemistry.

If you want bolder comic-book powers


You’ll need to step beyond “adjust existing constants” into new, ultra-short-range interactions (nm-scale) or metastable high-energy materials with safe triggers. Those can be hidden from lay experience, but they do add parameters that a very careful physicist might eventually notice. The scheme above stays conservative.




Bottom line:
Make a paired α/m_e tweak (~±5%), a sub-percent nuclear-physics nudge, and ride the knock-on to stronger biomaterials and ambient superconductors. Everyday life stays “normal,” but the world now supports believable biological outliers and compact super-tech—enough for heroes and villains without advertising a contrived universe.
 

Kharzette

Watcher of Overs
Have any of you guys made any local image classifiers?

I had this idea that sprang forth from a problem: I was chatting with a friend about how artists in the '80s leveraged the strangeness of CRT pixels to make better sprites, and I had the perfect image stored to show it off. I won't post it here because it was a pixelated butt.

To find it, I went to my pictures dir and scrolled forever till I found it. There are 1000 files in there.

So I thought, why not make a classifier that looks at every image in there, and spits out a guess as to what is in it. I could associate these guesses with the filename, and do some kind of command line program that searches, maybe listing the hits or opening a file manager type thing with the proper ones selected, or even firing off a viewer with a list of hits.

I searched around a bit and decided on resnet50. Found a decent bit of sample code to go with it (DeepLearningExamples/PyTorch/Classification/ConvNets/resnet50v1.5 at master · NVIDIA/DeepLearningExamples) and grabbed the pretrained model.

And of course like anything python or machine learning, it didn't work. There were 3 or 4 formats of model on huggingface and none of them worked. Eventually I scattered prints throughout the python and figured out that the code was expecting model weights in the form layers.x.y and all the stuff I could download was layerx.y. The pretrained models also began counting from 1, so I had to do all this munging and re-arranging but eventually got it to work.
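
A minimal sketch of what that remapping amounts to, assuming a PyTorch state dict (the filename and exact key patterns here are illustrative, not the actual script):

Code:
import torch

# Downloaded checkpoints name blocks "layer1"..."layer4" (counting from 1);
# the sample code wanted "layers.0"..."layers.3" (counting from 0).
state = torch.load("resnet50_pretrained.pth", map_location="cpu")

remapped = {}
for key, tensor in state.items():
    if key.startswith("layer") and key[5].isdigit():
        prefix, rest = key.split(".", 1)    # "layer3.1.conv2.weight" -> "layer3", "1.conv2.weight"
        idx = int(prefix[5:]) - 1           # shift the 1-based block index to 0-based
        remapped[f"layers.{idx}.{rest}"] = tensor
    else:
        remapped[key] = tensor              # conv1, bn1, fc, etc. pass through unchanged

# model.load_state_dict(remapped)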

I could have done said re-arranging in C in like 5 minutes but it probably took me an hour in py. Here's it running on a random funny jpg I had:
[attachment: Classy.png]

Anyway, it works, somewhat. It's not super good at anything anime or cartoony; it just tends to label it all "comic book" without distinguishing much of the image. But for a 100 meg model it isn't too bad.

I was wondering if you guys knew of anything better? I could probably handle up to around an 8 gig model.

One thing I discovered: I'm going to need a thesaurus. Any sort of cosplay girl (and I have a few in my pictures) came out with "maillot" as the top guess. I had to look up what that is.

So would it have helped me with my original problem? Not really. I would have searched for sprite, butt, crt, pixels, something like that.

The classification for that image was:
Code:
scoreboard: 31.0%
window screen: 7.4%
comic book: 7.2%
fire screen, fireguard: 3.1%
European fire salamander, Salamandra salamandra: 2.4%

So probably not.
 

Control

Silver Baronet of the Realm
Have any of you guys made any local image classifiers? […]
No better options, but I would also like something like this, so if you manage to find or engineer something good, post it up!
(also, there are threads here that embrace pixel butts!)
 

Kharzette

Watcher of Overs
I think I found something better. GitHub - facebookresearch/MetaCLIP: ICLR2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering

So for this one, you give it a few text phrases or tags or search terms or whatever, and an image, and it gives you the percentage match for each text item.

That's almost perfect for a "find me a picture of apples" in your own pics dir.

The downside is it takes around 7 seconds to run, and you can't really cache the results.

I'm really bad at python but I might have a go at having it run through a list of 10 or 20 images. Probably the loading / unloading of the model is where the bulk of that 7 seconds goes. If it can zip through images once loaded, this will be ideal.
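
A minimal sketch of that load-once-then-loop idea, assuming MetaCLIP's checkpoints load through the Hugging Face transformers CLIP classes (the checkpoint id below is a guess):

Code:
import glob
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

name = "facebook/metaclip-b32-400m"          # assumed checkpoint id
model = CLIPModel.from_pretrained(name)      # the slow ~7 seconds, paid once
processor = CLIPProcessor.from_pretrained(name)

terms = ["sprite", "butt", "crt", "pixels"]  # whatever you'd search for

for path in glob.glob("pictures/*.png"):
    image = Image.open(path).convert("RGB")
    inputs = processor(text=terms, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():                    # inference only
        probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
    best = int(probs.argmax())
    print(f"{path}: {terms[best]} ({probs[best].item():.0%})")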

On the image I used earlier it gave Megumin a 92% score. On this one it gives South Park a 95%, pretty good!
[attachment: VRChat_2024-07-08_21-01-07.322_1920x1080.png]
 

Kharzette

Watcher of Overs
I think I found something better. GitHub - facebookresearch/MetaCLIP […]
I tried a couple of cosplays and it didn't recognize Frieren, but it did recognize Zelda fairly well, around 50%. In both cases the cosplayers were semi-naked. Hilariously, a friend I showed it to knew one of the cosplayers was Helly Valentine, so I put that in and it got 95%, so it recognized her.

I'm thinking with cosplay it will do better the more closely the outfit matches the character, but I haven't tested that yet.
 

Kharzette

Watcher of Overs
Takes about 45 seconds to check a directory with about 1000 images. They really fly by, so yeah, the model load is the big slowness.

Something leaks, though, because the virtual memory climbs to around 15 gigs. I have no idea how allocations and frees work in Python, but I'll have to figure that out.
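
A sketch of the usual suspects to try, reusing the names from the sketch above (`paths` and `scores` stand in for the file list and a results dict; this is guesswork about the cause, not a diagnosis):

Code:
import gc

for path in paths:
    image = Image.open(path).convert("RGB")
    inputs = processor(text=terms, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():                    # without this, autograd graphs pile up
        probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
    scores[path] = probs.tolist()            # keep plain floats, not live tensors
    image.close()                            # free the decoded pixel buffer now
    del inputs, probs                        # drop tensor references each pass
gc.collect()                                 # nudge the collector afterwards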

If I recurse, there are 18k files in my pictures. I've got a lot of art concepts, old AI gens of mine, and wallpapers. It really starts to chug when it hits gigantic 40 meg PNGs. When I tried it, the machine nearly froze and I guess a watchdog process killed it (thankfully).

Right now it just spits out a percentage for each image. I'm going to need to collect those percentages in a dictionary and keep track of the top 10 or so, instead of just printing them all out.
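
A minimal sketch of that top-10 bookkeeping with a bounded min-heap (`results` stands in for whatever yields (path, score) pairs):

Code:
import heapq

def top_hits(results, n=10):
    """Keep only the n highest-scoring (score, path) pairs."""
    heap = []                                # min-heap: the worst kept score sits on top
    for path, score in results:
        heapq.heappush(heap, (score, path))
        if len(heap) > n:
            heapq.heappop(heap)              # evict the current worst
    return sorted(heap, reverse=True)        # best first

# e.g. top_hits([("a.png", 0.92), ("b.png", 0.31)]) -> [(0.92, "a.png"), (0.31, "b.png")]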
 

Kharzette

Watcher of Overs
Hmm so now that I have this almost in a practical state I see a big flaw. This isn't what I thought it was.

I was thinking this could do "What % confidence do you have that this image looks like an orange"

But it isn't really that, it is more like:

Given this list of text items, give the comparative percentage that you think the image looks like each. So given "tree" and "orange" it might say 20% tree and 80% orange on a picture of a car. Not super useful for searching.

Or if you give it just one item, it will say every image is 100% that item.

One way you might mitigate it is by feeding it a lot of text items to compare against. Maybe come up with a catch-all set of stuff like house, car, tree, cloud, but then you have to worry about overlap with the actual search parameters you care about.
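
Another angle, assuming the same transformers CLIP interface as the sketches above (reusing model, processor, terms, and a loaded image): skip the softmax and threshold the raw image-text cosine similarity, so each term gets an absolute score instead of a share of 100%:

Code:
inputs = processor(text=terms, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)   # unit-length (harmless if already normalized)
txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
cosine = (img @ txt.T)[0]                    # one independent score per term, no renormalizing

THRESHOLD = 0.25                             # a guess -- tune it on images you know match
hits = [t for t, s in zip(terms, cosine.tolist()) if s > THRESHOLD]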

Tricksy problem!