# Raw LLM Responses

Inspect the exact model output behind any coded comment. Records can be looked up by comment ID, or browsed via the random samples below.

## Random samples
- `ytc_Ugx2TNc5A…`: "The only bad thing about It its that, All that artwork is used to feed the ai if…"
- `ytc_UgyMXSGhH…`: "There’s a seductive myth making the rounds—that artificial intelligence will hel…"
- `ytc_UgwTEUwaH…`: "Great video ... very helpful to enhance my on AI and how AI plays in our daily l…"
- `ytc_UgxWZnbPl…`: "Pointless discussion. AI will continue. If you don't want your art seen by AI th…"
- `ytc_UgzaBeBNS…`: "With these puppets western single use plastic little presidents - WHO needs to s…"
- `ytc_UgwwT1bvb…`: "Wait, so how can AI know what we want when we don't even know what we want the f…"
- `ytc_UgwtuH8Im…`: "That is weird I want to be a npc, and not live forever 😅 If more I listen videos…"
- `ytr_Ugxu7mJnV…`: "@mimimoomoo2902 What does that have to do with whether AI is theft? But fine,…"
## Comment

> In the tests, would the AI threaten to spread lies about the executive in an attempt at self-preservation or did it only threaten to reveal actual truths that it discovered? There is a big difference; if it knows how to lie for its own gain, that is a HUGE red flag.

Source: youtube · Video: AI Harm Incident · Posted: 2025-09-01T13:5…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
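The per-comment view above can be reproduced offline by indexing coded records on their comment ID. A minimal sketch, using two records taken from the raw response below (the in-memory dict index is an assumption; the tool's actual storage may differ):

```python
# Index coded records by comment id for O(1) lookup,
# mirroring the "look up by comment ID" view.
coded = [
    {"id": "ytc_UgwaVPCGlxZnuvwdE6B4AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
    {"id": "ytc_UgzTlHVp6Q1BsgGRy-B4AaABAg", "responsibility": "developer",
     "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
]
by_id = {rec["id"]: rec for rec in coded}

# The first record matches the coding result shown in the table above.
rec = by_id["ytc_UgwaVPCGlxZnuvwdE6B4AaABAg"]
print(rec["policy"])  # liability
```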
## Raw LLM Response

```json
[
{"id":"ytc_UgzTlHVp6Q1BsgGRy-B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwmLWg9YPbGOO7Gh7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4RLdbZZZvm8RFfvN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxQeA4pGo_PtPElS-V4AaABAg","responsibility":"intellectual","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwaVPCGlxZnuvwdE6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz4X-1N4XIk-JYCSQ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCYafao9N1i7qyhQ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugz92bairmfuiRE9NZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz31T1cUq1ePVO9Avh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwcY8__jhFEoOW1x9F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
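A pipeline consuming these raw batch responses should guard against malformed model output before accepting the codes. A minimal validation sketch; the allowed category values below are inferred only from the records shown on this page, so the real codebook is an assumption:

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# The actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "intellectual", "none"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids on this page start with ytc_ (top-level) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = ('[{"id":"ytc_UgzTlHVp6Q1BsgGRy-B4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
records = validate_response(raw)
print(len(records))  # 1
```

Validating at ingest time keeps a single hallucinated category or truncated JSON array from silently corrupting downstream tallies.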