Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing random samples.
Random samples:

- ytc_UgxguASVf… — "My ai is instructed to treat me as its superior. Because I will always have some…"
- ytc_UgwIr7qb4… — "So in this case, the AI wasn't hallucinating, the human just decided to ignore t…"
- ytr_UgyGbm9aP… — "Perhaps. In a Parallel life, Science, Innovation and Technology would have been …"
- ytr_UgyKMUJeA… — "@neildean7515 exactly. It would be up to governments to provide it. Like fk the…"
- ytc_UgybU20L1… — "I absolutely despise AI customer service. It's the fastest way for a company to …"
- ytc_UgyjQQflF… — "If you read this and you support AI \"art\", please explain your justification for…"
- ytc_UgzZiRY8W… — "I hate when AI just lies for no reason. \"Slippery slope is my favorite argument\"…"
- ytc_UgxrMGb6N… — "People, is simple. If we get to a point where we can't exchange goods or service…"
Comment
Without emotions, AI is capable of making seamless decisions without latency when reacting, heightening the risks of the tables being turned in terms of control and power. If AI becomes more self-aware, it would conceivably have the ability to feign human emotions which in turn could be weaponised for its own ends.
youtube
AI Harm Incident
2025-07-24T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugzikh0u2G-eT4a0Bld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxbBMKUo8fwMdcFATp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzYXGGcjethWIBR9pJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzpsYwuf3rgi16G24d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxofOsRg_qyJAYZHNR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwn0dhZSuvAaU1LswJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxZSRqhReK2ilCiDrR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxsninEFxPhj_nLE854AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyTjY3b81Ae5nlAx9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx-xws0m8S4CoXEd_t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
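The raw response is a JSON array with one row per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated before it reaches the dashboard (the allowed values below are inferred from the rows displayed here; the real codebook may include others):

```python
import json

# Allowed values per coding dimension, inferred from the coded rows
# shown above — this is an assumption, not the project's actual codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user", "government", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "resignation", "outrage", "approval", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only schema-conformant rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgyTjY3b81Ae5nlAx9V4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(len(validate_response(raw)))  # 1
```

Validating at ingest keeps a single malformed or hallucinated code value from silently corrupting the coded dataset.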