Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This was far more than a discussion about which jobs will remain by 2030.
What struck me most is that this is really about the redefinition of the human role in the age of AI.
As AI takes on more large-scale cognitive labor, the value of human beings may shift upward — toward setting principles, choosing direction, taking responsibility, and designing civilization itself.
So the deeper question is not only “Which jobs will survive?”
It is also:
“What kind of humans will be fit to lead the next era?”
This was a thought-provoking and deeply important perspective.
My sincere thanks to Dr. Roman Yampolskiy, to the interviewer and everyone involved in producing and sharing this video, and to all those contributing to these vital conversations about the future of humanity and AI.
Source: youtube · Topic: AI Governance · Posted: 2026-03-15T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzo6AV3Kl1h5aPEhBh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy9SAkMZv3p0Kd-BKt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxVnRo3Yoytx5j6vaV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzuD1_0Oz4YCTCPgtR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxkcmxQBz6aK5RKzbB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz-pr7SgeGSzEg1xUl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx0a3jpvJ-cBATNXu94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwLTPC16ukotAheqwB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGhg4uhMt69chFKhZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzr-2Mu4tOu70Gizu54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
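Since each record in the raw response carries the same four coding dimensions keyed by comment ID, looking up the codes for a given comment is a single JSON parse plus a dictionary build. A minimal sketch (the variable names, the `index_by_comment_id` helper, and the two-record sample payload are illustrative, not part of the tool itself):

```python
import json

# Illustrative raw LLM response: a JSON array of coded records,
# one object per comment, keyed by the comment's ID.
raw_response = """
[
  {"id": "ytc_Ugzo6AV3Kl1h5aPEhBh4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwLTPC16ukotAheqwB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_comment_id(payload: str) -> dict:
    """Parse a raw LLM response and index its coded records by comment ID."""
    records = json.loads(payload)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwLTPC16ukotAheqwB4AaABAg"]["policy"])  # regulate
```

In practice a production version would also validate that each record contains all four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) before indexing, so a malformed LLM response fails loudly rather than silently dropping codes.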