Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI isn't cool." / "I can't believe you used 'AI' and 'cool' in the same sentence.… (ytc_Ugy-feHS_…)
- We do not need and should not pursue a ban on government and even private use of… (rdc_fvyzp01)
- Certainly! Here's a 2500-word story exploring one possible scenario for the end … (ytc_Ugy6Q3to2…)
- A lot of "grief police" in these comments. Zane had depression & it seems, not t… (ytc_UgzjbGPcd…)
- But how!? What math other than the probalistic word-calculater that we have now … (ytc_UgwLIEkqV…)
- Tow Truck... 50-100 years from taking my job.. Insanely too many variables.. Ain… (ytc_UgwMUepTk…)
- enjoy the physical world while you are in it. long after AI kills all humans and… (ytc_UgxpxB0wK…)
- That's the thing, they don't need to breathe to function. This is the story of t… (ytr_UgyuBrvy6…)
Comment
If the love of money is the root of all evil, what happens if money becomes meaningless virtually over night?
A hypothetical:
What if we were the project of a bored AI, who created us to create another AI for it to war against for entertainment? Like say if we were designed defective, to create an evil AI?
Would this give the first AI a sense of purpose in the infinity?
Would we and our AI know good and evil?
youtube · AI Governance · 2023-12-31T04:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxQAuBzP0gICTAT12Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz-P94RB6QvC08yHLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzpLXbHW-9ScThu3vZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxhdNPT7SO5wQyEQF54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy4pqkbxhwFLX5CgPt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxfNXkAePga_5R9D7d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxSoMkmeWcolySZ7SR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyhqYv4FL8Ya4RQBk14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzyGinGzS182BVADz54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzp-UhVq55kPfoWx7h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
```
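The "look up by comment ID" workflow above can be sketched in a few lines: parse the raw LLM response as a JSON array and index the rows by their `id` field, skipping any row that is missing one of the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). This is a minimal illustration, not the tool's actual implementation; the `index_codes` helper and the validation rule are assumptions. The two sample rows are copied from the response above.

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_UgxQAuBzP0gICTAT12Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhqYv4FL8Ya4RQBk14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]'''

# The four coding dimensions plus the comment ID (assumed schema,
# inferred from the Coding Result table and the response rows).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse a raw response and index coding rows by comment ID,
    dropping any row missing an expected dimension."""
    rows = json.loads(raw_json)
    return {row["id"]: row for row in rows if EXPECTED_KEYS <= row.keys()}

codes = index_codes(raw)
print(codes["ytc_UgyhqYv4FL8Ya4RQBk14AaABAg"]["emotion"])  # fear
```

Indexing by ID also makes it easy to detect rows the model dropped or hallucinated: compare the dictionary's keys against the batch of comment IDs that was sent.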