Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "We all know the end result will be the banning of motorcycles. They don't fit in…" (ytc_UgwBpAMug…)
- "Sam Altman came out and advised against it. He said the extra processing and ene…" (ytc_UgzAR2laK…)
- "He's right. With a fair wind, nukes will never be deployed. AI, on the other han…" (ytc_UgzYKyITk…)
- "Matter of ethics at the hands of the person who is creating these things. \"Shou…" (rdc_nzk54od)
- "My friend, to me about Character AI; \"HELP SAUL GOODMAN KEEPS CALLING ME GIRLYPO…" (ytc_UgyEGyQvo…)
- "As an engineer: It doesn't do what you think it does. There is going to be fewer…" (ytc_Ugzz62p6j…)
- "The discussion about AI hurts so much to listen to. Neil, when those "new jobs" …" (ytc_UgwcET0Qn…)
- "And for everybody who thinks their job is safe because AI can't do it...ask your…" (ytc_UgxDBynqO…)
Comment
Tech companies are foolish. With GAI, even if benevolent? 70% minimum of people lose their job in a year or two. Even labor work machines will do and machines made to fix each other make mechanics irrelevant. GAI can make machines create other machines to fix machines made by machines already made.
No jobs, no money, and GAI makes money worthless. Having a trillion dollars will be meaningless. Having gold/etc will be useless. The only outcome that humans are not in danger is where income no longer means anything.
Creating GAI is a terrible idea in any way you put it inside of current forms of economic and government forms.
The era of social media influencers is disappearing in front of us too with people preferring shorts made by AI over long form content by people. This doubles as being a problem every year.
The end goal will be nearly zero interaction between real humans.
There is money in AI. There is no profit in GAI where it can solve nearly every problem on Earth, or at least in first world nations.
Source: youtube | Video: AI Moral Status | 2025-11-04T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwYMbnDM2VGwox_aOJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxYOycNbQ4IvjMtNMV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJTMhPNMrUXQM_uaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxrgXxoJVinsyrkMsV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwbeUYUR7QWsH2Lz4t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxG5K3a3A25sztovYJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyhsNprkIeiJxp3n3x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwLBIjK-OVx0rpwkFF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuvROVe4G8ECnVlfd4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyP_aecgiga2Y5SZLt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
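The raw response is a JSON array keyed by comment ID, so a coded comment can be recovered from it directly. A minimal sketch of that lookup, assuming the response parses as valid JSON (the abbreviated response text below is a stand-in, not the tool's actual code):

```python
import json

# Stand-in for the raw LLM response: a JSON array of per-comment codings,
# abbreviated to one record here for illustration.
raw_response = """
[
  {"id": "ytc_UgzJTMhPNMrUXQM_uaJ4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Index the codings by comment ID for O(1) lookup.
by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the four coded dimensions for a given comment.
coding = by_id["ytc_UgzJTMhPNMrUXQM_uaJ4AaABAg"]
print(coding["policy"])  # regulate
```

In practice a real response may fail to parse (truncation, stray text around the array), so production code would wrap `json.loads` in error handling before indexing.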