Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- What are economies for? The novelist’s “nationalization” argument is interesting… (ytc_UgyxXajnF…)
- “ai can do things you never thought possible!” it can literally only do things f… (ytc_Ugw4rI2FK…)
- This feels like a good ending to a 4-year long movie. It’s always good guys who … (rdc_gbhqjsy)
- I am scared of him. Adding to all other words I feel like the new order is that … (ytc_Ugwg7AG2M…)
- There might be a way for the ai to get around this by not training anymore and u… (ytc_UgzOOyZrs…)
- Questions to ask yourself to upgrade your feelings (which is your internal tool)… (ytc_UgwQ9uhIP…)
- There are a large majority of police cars in America equipped with facial recogn… (rdc_mzjnte2)
- its a jailbroken AI model, we feature the process in our most recent upload, you… (ytr_UgwOYy9uJ…)
Comment
Aren't companies like Anthropic publishing stuff about how AI went rogue in tests, etc done on purpose to make their products sound better than they actually are to investors? I don't agree with Nate just brushing off the Sora AI slop as being separate from what they're actually doing, I think it's giving the AI companies too much credit. I don't think this current version of AI is anywhere close to the whole superintelligence thing and it's just a big distraction from the real harm it is doing right now. I agree with what you said at the end there, but I'm not sure I like living in interesting times...
youtube · AI Moral Status · 2025-10-31T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwUXNN0BH9UGFe3AIR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzRmfkOp6bO0nb9UXx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyi3RCOeht4txJNWBB4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzM2FPyCXlq3ddCGYd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxD4DvwO2UxlJlS6114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6Pt_A9K6iBockeqF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzOjPJrQfssCdRpZDd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQk-TwitKTFePsIm54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugypezwk4B0M5UuE24V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyty4n1d7bq8r3t-k14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
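The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a response could be parsed and indexed by comment ID is shown below; this is an illustration, not the tool's actual implementation, and the helper name `index_codings` is hypothetical. The field names are taken from the JSON above; the sample data here reuses two of its rows.

```python
import json

# Example raw LLM response (two rows copied from the output above).
RAW = '''[
  {"id":"ytc_UgwUXNN0BH9UGFe3AIR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzRmfkOp6bO0nb9UXx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# Fields every coding row is expected to carry, per the JSON above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the JSON array and build an id -> coding lookup,
    rejecting any row that is missing an expected field."""
    rows = json.loads(raw)
    index = {}
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing fields: {missing}")
        index[row["id"]] = {k: row[k] for k in REQUIRED_FIELDS if k != "id"}
    return index

codings = index_codings(RAW)
print(codings["ytc_UgwUXNN0BH9UGFe3AIR4AaABAg"]["emotion"])  # outrage
```

An index like this is what a "Look up by comment ID" view needs: one parse of the raw response, then constant-time retrieval per ID.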