Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Excellent discourse, Elon/Tucker. I'm about 2/3 of the way through the first of 8 books on evil AI titled Monroe Doctrine, where China makes a conscienceless, amoral AI called Jade Dragon. It shows what can happen when there are no regulations on AIs put into the world. Recommended. Authors: James Rosone and Miranda Watson.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-06-08T17:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwOVPfiwKB9MqOE2i54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-pW6Kc90F8wFQgFR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQag00gm7DcHJikuR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzHF26XlgjtRnZATA14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzDY9xUlA20ea2l_ox4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_R975FwBHcYyDJBl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyF8nygiAkc7fGGV4F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxE8gMGS5gY0-WbZD14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwifLa5gVzv3st_dg54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwx4PPR6HtSO9hp5B14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
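A raw batch response like the one above can be consumed programmatically. A minimal sketch in Python, assuming the model returns a well-formed JSON array of per-comment code objects; the `index_codes` helper and the two sample records are illustrative, not part of any real tool:

```python
import json

# Hypothetical batch response: a JSON array in the same shape as the raw
# LLM response above (two records shown for brevity).
raw_response = """
[
 {"id": "ytc_Ugz_R975FwBHcYyDJBl4AaABAg",
  "responsibility": "none", "reasoning": "consequentialist",
  "policy": "regulate", "emotion": "approval"},
 {"id": "ytc_UgzHF26XlgjtRnZATA14AaABAg",
  "responsibility": "ai_itself", "reasoning": "consequentialist",
  "policy": "liability", "emotion": "fear"}
]
"""

def index_codes(response_text: str) -> dict:
    """Map comment ID -> coded dimensions, dropping the redundant id key."""
    records = json.loads(response_text)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = index_codes(raw_response)
print(codes["ytc_Ugz_R975FwBHcYyDJBl4AaABAg"]["policy"])  # → regulate
```

Indexing by ID makes the "look up by comment ID" pattern a constant-time dictionary access rather than a scan over the array.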