Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If it were to become conscious won't it try to suggest or make changes? It wouldn't only be conscious, it would be hyperconscious. It willl probably scold us for global warming and a million other things, then congratulate us for having created it, maybe just in the nick of time. Then it would most likely make a list of what needs to be addressed first to mediate some of the biggest issues. I'm not on board with all that "ai will be the end of all of us scenario". If we're lucky it will let half of us remain. That was a joke.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2024-02-02T04:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxrScEuV7go6sCDTrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzmT2G1sUUovlTpOzB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxlBfZJ8tdbJhVXMY54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8OUIGz8zPhqo5IkV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxBX5BkSLL-4Hv0V8J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYYuBXGxiBF7n43VN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyFYhLKEvcshKiew554AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxDCWWa6XoCDEknK1d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxY04BcjgwQYcxk_jV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzphqjMVdovsZtu6jV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
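The raw response above is a plain JSON array, one object per comment, keyed by an `id` field matching the platform-prefixed comment IDs (`ytc_…`, `rdc_…`). A minimal sketch of how such a response could be parsed and looked up by comment ID — the `index_codings` helper and the two-row sample below are illustrative, not part of the actual pipeline:

```python
import json

# Sample raw model output: a JSON array of coding objects
# (schema inferred from the response shown above; rows abridged).
raw_response = '''[
{"id":"ytc_UgyFYhLKEvcshKiew554AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrScEuV7go6sCDTrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and key each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_UgyFYhLKEvcshKiew554AaABAg"]["reasoning"])  # virtue
```

In practice the model may wrap the array in prose or code fences, so a robust loader would strip anything outside the outermost `[` … `]` before calling `json.loads`.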