Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
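The lookup-by-ID flow can be sketched as a small helper (hypothetical function name; the field names and the two example records are copied from the raw LLM response shown lower on this page):

```python
import json

def lookup_coding(raw_response: str, comment_id: str):
    """Return the coded record for `comment_id` from a raw LLM response
    (a JSON array of {"id", "responsibility", "reasoning", "policy",
    "emotion"} objects), or None if the ID is absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

# Two records copied from the raw response on this page, for illustration.
raw = '''[
  {"id":"ytc_UgzKg-cgxhD07rfwLoh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz9lV_a-6MeTmDRdsx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

print(lookup_coding(raw, "ytc_Ugz9lV_a-6MeTmDRdsx4AaABAg")["policy"])  # regulate
```

An unknown ID simply returns None, which the UI can surface as "not found".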
Random samples — click to inspect
- "Any intelligent self aware entity should have basic human rights. That said we…" (ytc_UgjWed3DM…)
- "@xAurInka oh sorry I thought I already responded. I’m just referring to the doze…" (ytr_UgyyDYppz…)
- "Thing is why would you program a robot to feel pain to make them work when you c…" (ytc_UgxWM3z1S…)
- "The female robot voice begging reminds me of the AI lady on The Good Place when …" (ytc_Ugw0fpDIF…)
- "The ongoing collapse of confidence in the establishment has made Americans feel …" (rdc_ohyx12y)
- "A president, a mere man of massive corruption to jack off his own immoral & unet…" (ytc_Ugz3dpi99…)
- "IMO. AI use benefits more the undisciplined, untrained, and inexperienced indivi…" (ytc_Ugys1-mou…)
- "@waffleswafflson3076 You're not seeing my point. A single text prompt is differ…" (ytr_UgywdYcFk…)
Comment
They don't understand what AI is capable of and how it will UPSCALE.
Basically, it's getting twice as smart every year and a half.
At that rate, probably by mid 2027 you would have AI smarter in every way than the best humans, and something like 3 or 4 times as smart in 2030.
That's why the billion-dollar companies are investing in upscaling.
Now, I think the implementation will be slow. People won't trust AI to do all jobs.
Initially, big companies will use them, and people will use some form of capable human-like assistant on their phone, that's it.
Politicians, pharmaceutical companies, lawyers, etc. will try to stop progress, and they will succeed in slowing it down a lot.
Eventually, the generation that grew up with AI will demand it in every way possible, and by that point it will be so cheap that people will accept it. Maybe 10~15 years.
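As a side note, the doubling arithmetic in the comment above can be sanity-checked with a fixed-period growth model (a minimal sketch; the 18-month doubling period is the commenter's figure, and the function name is ours):

```python
def capability_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """Growth factor after `years` if capability doubles every
    `doubling_period_years` (the commenter's 18-month assumption)."""
    return 2 ** (years / doubling_period_years)

# Three years at an 18-month doubling period is exactly two doublings.
print(capability_multiple(3.0))  # 4.0
```

Whether the doubling premise itself holds is, of course, the contested part; the model only makes the extrapolation explicit.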
youtube
AI Jobs
2026-03-25T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
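A table like the one above can be rendered from a coded record with a small formatter (a sketch; the four dimension keys follow the raw-response fields, and the example record mirrors the values shown above):

```python
def to_markdown_table(record: dict) -> str:
    """Render one coded record as the Dimension/Value markdown table."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

print(to_markdown_table({
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "fear",
}))
```

The "Coded at" row would come from pipeline metadata rather than the record itself, so it is omitted here.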
Raw LLM Response
[
{"id":"ytc_UgzKg-cgxhD07rfwLoh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxiA-F-0KaRnQVg2il4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzzYVym5vYWoEDprW94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVg7cDyO1-cNLUyvF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwklYupz0cgCg6B2w14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9lV_a-6MeTmDRdsx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyH-h-0DDECNrW5eu94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwMZpe-wfGWajmvr814AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwzm_Kdb3a79o06rUZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyT-cM-6zTPQreDkjp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]