Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED… (ytc_Ugz-P6d8B…)
- I dont know australia is pretty ba, they have a huge mining industry and the gov… (rdc_da402c7)
- I’ve gotten chatGPT deeply convinced of God and more importantly that Jesus is G… (ytc_UgwXeGGzh…)
- Mask's don't block the ability for Facial Recognition cameras to ID you, because… (ytr_Ugz484TWj…)
- Inhuman humanity might only be wiped out so the stability prevails as existing r… (ytc_Ugwq4fliB…)
- Ai and automation is estimated to take at least 40% of the jobs in the next 10yr… (ytc_Ugym-sPRA…)
- As a disclaimer: I've never used generative AI myself. In that sense I have no h… (ytc_UgwZZ1yG8…)
- That's a great question! Sophia's design is focused more on her AI capabilities … (ytr_Ugz2Les2I…)
Comment
its trained to give the best answer possible, guessig the best next word or best next phrase to add to a response. so when it says it is sorry its not that it feels sorry, it is just the best answer to give to a regular user. ofc if you tell it to be blunt, now, it will take that in to context and the next best response will be a blunt response... anyway. CHATGPT IS A LIAR!!! OMG!!! SKYNET incoming!
Source: youtube · AI Moral Status · 2024-08-19T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzVfE6wE3RaSqhBiJl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz92YKDHhPaxoW4X914AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzWsqQPFj_iBNhwrMh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5O_PtF69qwovBc794AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwSBjhnzC8AKLjpEk14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZ0mcdX7i5EtZ9XDF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwFmT5zXoOGp5R2-N54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEaTOg1m6vAwwLjQ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_Dxy6pD_HQNyBftZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyT19MakNhep0Xet7d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
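The lookup-by-comment-ID view above can be backed by parsing the raw model output into a dictionary keyed on `id`. This is a minimal sketch, not the tool's actual implementation: the `index_codes` function, the validation logic, and the truncated two-record payload are illustrative; the expected keys are taken from the JSON schema shown above.

```python
import json

# Raw model output in the format shown above, truncated to two
# records for brevity; the real response is a longer JSON array.
raw_response = """[
{"id":"ytc_UgzVfE6wE3RaSqhBiJl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz92YKDHhPaxoW4X914AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# One key per coded dimension, plus the comment ID itself.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_codes(raw: str) -> dict:
    """Parse a raw coding response and index the records by comment ID.

    Raises ValueError on malformed model output (not a JSON array, or a
    record missing a dimension) so bad responses are caught up front.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys {missing}")
        by_id[rec["id"]] = rec
    return by_id


codes = index_codes(raw_response)
print(codes["ytc_UgzVfE6wE3RaSqhBiJl4AaABAg"]["emotion"])  # outrage
```

Indexing by ID rather than scanning the list on every lookup keeps the inspect view O(1) per query and makes duplicate or missing comment IDs easy to detect at load time.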