Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "In the Senate, the 10-year ban on not allowing US states to regulate AI was stru…" (ytc_UgxwqTrXF…)
- ""They" have to tell you the truth, but mixed in with even more lies! Lesser magi…" (ytc_UgyUJxsa8…)
- "Factory worker jobs are being replaced by robots, taxi drivers, uber drivers, tr…" (ytc_UgzpYyHS8…)
- "There are many ways to identify bot vs human users they could use if they wanted…" (rdc_ohymoz7)
- "AI in agriculture has the potential to transform food systems and help address t…" (ytr_UgwaBKiBv…)
- "The Godfather of AI, Geoffery Hinton, says there is a 10 to 20% chance AI will l…" (ytc_UgzNuVN8u…)
- "The power that be dont want to stop. Humans can NOT survive outside of Earth's o…" (ytc_UgynJgfqU…)
- "My new phone came with Gemini, Google's AI assistant. I don't use the assistant …" (rdc_m26pecz)
Comment
I suspect a large difference between the emotions of humans and future AI systems is many of our emotions seem to function based on datasets and parameters we aren't fully aware of which creates a sort of ambiguity that feels like it connects you to something beyond, while AI systems will likely be very aware of the dataset influencing the "emotional state" its also likely that humans who dont understand that will continue to write off the validity of AI emotional states
youtube · AI Governance · 2025-06-27T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwU8Ryx5r-fcKHHwER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxSbYNfV9lOZEl-pk94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyrRzaBwlNaq2Mro-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwbPEn4FoGdbiS1mD54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy72aCWlIHSv2ZezvR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzy26dUdgizXY3lPgp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxl5mpTOjMIqfLRtm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz9qCHs4W4B0R-lGBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgysPgXoPwGsEaz7UR54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxjO3vsXTinYRy7WHh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
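The raw response is a JSON array with one object per coded comment: an `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming only the schema shown above (the function name `lookup_codes` is illustrative, not part of the actual tool):

```python
import json

# A shortened raw LLM response, matching the schema shown above.
raw_response = """[
  {"id": "ytc_UgxjO3vsXTinYRy7WHh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy72aCWlIHSv2ZezvR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def lookup_codes(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the id so only the four dimension values remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

codes = lookup_codes(raw_response, "ytc_UgxjO3vsXTinYRy7WHh4AaABAg")
# codes["responsibility"] == "ai_itself", codes["emotion"] == "mixed"
```

In practice the raw output would also be validated against the allowed category sets before the dimensions are written to the coding-result table.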