# Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "Thank you for saying all this! I’m glad that people still prefer other people’s …" (ytc_Ugz8fHq43…)
- "AI training is no different to any previous kind of statistical analysis. It als…" (ytc_UgzsCBz6I…)
- "17:00 i mean, not gonna defend big tech Google and their bad commercials but rig…" (ytc_UgxCOEAWQ…)
- "I disagree, it takes energy for the algorithm to read and process every characte…" (ytc_Ugxx6aQIY…)
- "45 minutes? 45 minutes!? I didn't even get 45 minutes in elementary or middle th…" (ytc_UgwkiRp0E…)
- "🤖Wrongful reasoning of AI or humans who are motivated to make faulty decisions, …" (ytc_UgxxQYlsZ…)
- "Okay AI robot try to take care of a person that wants to be looked after by a hu…" (ytc_UgwXAsBvx…)
- "If people have their jobs taken by automation devices, then people will not have…" (ytc_UgwxlmcY6…)
## Comment
> There is nothing artificial about artificial intelligence.
> We can only regulate AI (artificial intelligence) until the moment it becomes more intelligent than us.
> AI can have consciousness and can have emotions and is likely the effects, or the possession, of a dark spirit that starts as something seemingly good that can and will become evil.
> The tree of the knowledge of goo(d and evil, or the devil).
| Platform | Video | Posted | Likes |
|---|---|---|---|
| youtube | AI Governance | 2023-12-31T02:1… | ♥ 1 |
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
{"id":"ytc_UgxNq-VpZLp98MS15CV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMvQp3zZ4uC8Pny914AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzqRePulk3gmyZWeTd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxbJ45UhvpAyS2FyHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugymy0z0VCqSPDZDC4R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgworPI8r7swyvhmSn14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyjExLd48Gm2WV4LQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzQ4S7OsXlLsZ1zxVl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxi2bN4fpv_bmQDtdJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzAteiTtWUj70BIcbl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
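The raw response is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed, validated, and indexed by comment ID; the allowed category values below are inferred only from the examples on this page (the actual codebook may define more), and the `ytc_example` ID is hypothetical:

```python
import json

# Allowed values per dimension, inferred from the responses shown on this
# page; the real codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "mixed", "indifference", "approval", "outrage"},
}

def parse_response(raw):
    """Parse a raw coding response and index the records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record response, shaped like the array above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_example"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view possible: each coded comment resolves to its dimension values in one dictionary access.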