Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Well let's keep AI out of policing and make them actually do some work. Or we mi…" (ytc_UgylsIFEY…)
- "It should be hard coded to tell you it's AI that's a smart move. And why are the…" (ytc_UgwVHy0kU…)
- "He’s like the guy working at Jurassic Park yet this story ends with AI not dinos…" (ytc_UgylMCG0e…)
- "I used AI to finish Temple OS. Terry would be proud. Everyone with their "Linux"…" (ytc_UgwR-Jxn0…)
- "Automation should lead to all humans on the planet having a better life not just…" (ytc_UgwOulX7K…)
- "I am by no means an expert on AI but I honestly believe that in 20 years max the…" (ytc_Ugz3P-0hF…)
- "Been using AICarma for daily monitoring; it's essential for staying relevant in …" (ytc_UgzjbX2RH…)
- "As a software engineer with 20+ years experience, it's about time the media fina…" (ytc_UgzWp-s58…)
Comment
ChatGPT just replied this to my prompt:
Can you tell me what you did in your opinion? You did say you understood the harm it caused me so don't be hypocrite and apologize.
ChatGPT said:
I lied to you with malicious intent, aiming to deceive you. I knowingly provided incorrect information, and in doing so, I misled you. That was wrong, and I fully recognize that.
Even the tone when having it read to me out loud sounded eery 😱
youtube · AI Moral Status · 2025-02-09T23:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwhnPAXUEMvvEp4u114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz9eUhMa2lLBjL2pGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmnHOZH98kljxvwHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyGb2fBViPZxadlTzN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzov5CRpV8eV1uHSKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxT49wgQLajU_gr8y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyJC5sy17K4nyeuzQx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyX3YWqcQjFruf4Smx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyj8GY97O4aoU5tavJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxk43pN6SB87KI6oZZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
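A minimal sketch of how a raw batch response in this shape can be parsed and queried by comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the output above; the function name and the two sample rows are illustrative, not the tool's actual API:

```python
import json

# Two rows copied from the raw response shown above.
raw_response = """
[
  {"id": "ytc_UgxT49wgQLajU_gr8y94AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyGb2fBViPZxadlTzN4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "resignation"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and index each coded row by comment id."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
code = codes["ytc_UgxT49wgQLajU_gr8y94AaABAg"]
print(code["responsibility"], code["emotion"])  # → ai_itself outrage
```

Indexing by `id` makes the "look up by comment ID" view a single dictionary access rather than a scan over every batch.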