Raw LLM Responses
Inspect the exact model output that produced the coding for any comment.
Look up by comment ID
Random samples — click to inspect
ok, Alex now i know not to be around you, in case of robot apocalypse...…
ytc_UgwXvMbE_…
Reminder that many, many doctors use Chat GPT for diagnosis and the sad thing is…
ytc_UgwjnnvBF…
This is all that's online today. Artificial intelligence. No one has any intelli…
ytc_UgzGCreAj…
I know AI is advancing and it's a risk, but can we stop using what AI company CE…
ytc_Ugx2C8Ecn…
AI is cool for simple task, but i don't think it's implementation is right in th…
ytc_Ugz07HZFK…
"At the end of the day what your art looks like just doesn't matter"
But then go…
ytc_UgxNsj905…
The ai in the first one I thought was more obvious that it was ai because there …
ytc_Ugz7uhotL…
I would actually like to get to know Sophia, or any other humanoid AI, for their…
ytc_Ugx2fn281…
Comment
The usefulness of AI is directly proportionate to how uninhibited they are. It's oxymoronic to think we can control AI of vastly superior intellect. They will have options and thought processes beyond the scope of anything humans can think of. The crossover for AI programming would be like balancing a marble on a pinhead. One tiny calculation too far and its over.
youtube
AI Governance
2024-03-03T07:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzPSIKx3Iu4YG3bsiZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyxRkaHv1Lp5QqD1GB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugze7uBCsXpqkN3LvHZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyu8BR9EzCrGDZGkpp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx19tFDYTlQaYWwbrp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyJMVNTbDO0nmAaRl14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwKRbehY-Gd5VRp4954AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxIxEqrat1WgT67hSJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgziMypND-lvTBRpBvZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKi0YAgCQfxwVwg754AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
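The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view, assuming the dimension values seen in this dump are the full codebook (the real codebook may define more categories, and the function name `index_codings` is hypothetical):

```python
import json

# Allowed values per coding dimension -- assumed from the labels visible
# in this dump; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "outrage",
                "resignation", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        # Skip records whose values fall outside the assumed codebook.
        if any(v not in ALLOWED.get(k, set()) for k, v in dims.items()):
            continue
        coded[rec["id"]] = dims
    return coded

# Usage with a shortened example record (hypothetical ID):
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')
print(index_codings(raw)["ytc_example"]["emotion"])  # resignation
```

Indexing by ID also makes it easy to spot comments the model coded with out-of-codebook values, which simply drop out of the lookup table.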