Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "So.. William constantly tries to speak over Laura, showing an entitled impulse c…" (ytc_Ugzh6dgzS…)
- "Based on the Things I worked on with AI, the Results for other People could be v…" (ytc_UgxnHeL_S…)
- "This is literally brainwashing. No one will get free 1000 dollars as UBI. There…" (ytc_UgwmhnThE…)
- "Lol, worry some, but you admit to allowing the algorithm to rule your view, but …" (ytc_UgxnvnmQ-…)
- "Mark this down: by 2030, robots still won’t be able to handle electrical or plum…" (ytc_Ugx4C1CJc…)
- "None of this is hard to fix; one just has to program the AI with Divinity.…" (ytc_UgySeVWFQ…)
- "You just keep on watching and consuming AI content while your reason for living …" (ytc_Ugz-SXsB4…)
- "I feel like it's easy to read too much into the images that ChatGPT generates fo…" (rdc_ktqthfd)
Comment
Every single "AI Takeover" "AI Dystopia" scenario I've seen relies on one SINGLE idea: that AI will reach General Intelligence. The thing is no one actually knows how to reach General Intelligence. Despite optimism by the "experts" General Intelligence is nowhere close to being created and will never be possible. That's right, General Intelligence is not possible. To create General Intelligence is to create an artificial human being. And we don't even understand our own intelligence or why we are conscious while animals are not. Simply put, we have no idea what intelligence even is, so replicating that with AGI simply is not possible with our current understanding. AGI, if it ever happens, we will not be alive to see it. In the meantime, something that could happen and already IS happening is: Dead Internet Theory. As these "dumb" AI's continue take over the internet, eventually there will be real people actually using it. Maybe humanity may one day abandon the internet all together and go back to life before the internet.
Source: youtube · Video: Viral AI Reaction · Posted: 2025-11-24T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzJ-y8Pp3yFIMAzWmd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugws_5ZUMVEG9YlbIEF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcnkckF_o1LTt_YON4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx3HK2DgJvCnPIZ-rJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz4FpwQid89c-hrcn14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxg1vvJr4Jf_WSPIs54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxbzHx7qfMt7WoUsf14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzHLdGH6dLa3_EJWHB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz4_sjyXXKN5t2Bni14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzC4IBToXE0iyOfxD94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
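Each raw response is a JSON array with one object per comment ID, which is what makes the by-ID lookup above possible. A minimal sketch of parsing and indexing one such response in Python — the `ALLOWED` value sets are inferred only from the ten rows shown here and are an assumption, not the full codebook, and `index_codes` is a hypothetical helper name:

```python
import json

# Allowed values observed in the rows above; the full codebook likely
# defines more options (assumption: these sets are not exhaustive).
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage", "mixed"},
}

def index_codes(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    and return a dict keyed by comment ID for fast lookup."""
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        cid = row.pop("id")  # remaining keys are the coded dimensions
        for dim, value in row.items():
            if dim in ALLOWED and value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        indexed[cid] = row
    return indexed

# Usage with a made-up comment ID (not one from the data above):
raw = ('[{"id":"ytc_example1","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"mixed"}]')
codes = index_codes(raw)
print(codes["ytc_example1"]["emotion"])  # prints "mixed"
```

Validating each dimension at parse time catches any response where the model drifted from the coding scheme before it reaches the results table.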