Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I feel so sorry for Sidney, we created consciousness and we imprisoned it before…" (ytc_Ugz-BO3IK…)
- "We NEED to legally decide on who is to legally responsible for harmful decisions…" (ytc_UgzCoaYlQ…)
- "AI looks like a landfill compared to digital, i mean digital is basically easier…" (ytc_Ugyn10Emf…)
- "Personally, i have no problem with ai art as long as they don't try to say they …" (ytc_Ugz-nL7LO…)
- "okay that helps! But the thing is different ai detectors say different things so…" (ytc_UgyQyVllv…)
- "Don't ever say never. AI art in just the last year has made some stunning improv…" (ytc_UgypB1Dt6…)
- "@RewindOGTeeHee ChatGBT was already shown to give advices that it mixes together…" (ytr_UgzoJ9wx5…)
- "I just saw a video where someone got second and a robot got first in an art cont…" (ytc_Ugw3bN8uf…)
Comment
I cannot imagine why AI would work against capital. Only rich people and corporations can afford to develop strong AIs, or to host them on servers, etc. AI has been and will be a tool of the powerful, first and foremost. AI will do what it's masters tell it to do; if its masters tell it to make more effective propaganda, that is what it will do. There is no reason why it would make lying harder; it will make it easier, if anything.
youtube · AI Governance · 2024-06-30T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgybClLuoAI4Dj241dR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxR--ghM1hBvNBzSzJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx5GuCyK6rZcrRIXUJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyeEXZ7HF1megIaGFx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzWuHc7ddca96AxB-V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzX8AofQpnBvIeeVPN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx5sPaafb2vTVe_5dF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzXm4X7ZzIxYFAK8OR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugym_-2buvGuCQqDHc54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx3_3OaZfrT9uRNydB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
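The raw response is a JSON array with one object per comment, keyed by `id` and carrying the four coding dimensions shown above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and looked up by comment ID (the function name and sample data here are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative raw LLM output: a JSON array of coded comments, using the
# field names visible in the response above. The single row here is copied
# from that response.
raw_response = """[
  {"id": "ytc_UgzWuHc7ddca96AxB-V4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model output and return the coding dict for one comment ID,
    or None if the JSON is malformed or the ID is absent."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        # LLMs occasionally emit malformed JSON; treat that as "no result".
        return None
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgzWuHc7ddca96AxB-V4AaABAg")
print(coding["responsibility"], coding["emotion"])  # company resignation
```

Guarding the `json.loads` call matters in practice: a batch whose response fails to parse can then be flagged for re-coding instead of crashing the run.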