Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- All robots are programmed by humans. The humans that program these robots have a… (`ytc_Ugw_HtOKu…`)
- Is very clear a worldwide computerized intelligence will continue gathering info… (`ytc_UgznXMwzD…`)
- Thank you for continuing to educate people about AI and the things we can do as … (`ytc_UgxDpbRc_…`)
- Self driving cars are so far off. Elon is a con man. Do not trust self driving t… (`ytc_UgyRSVtvY…`)
- I'm gonna use chatgpt to tell me what's important from this and save me 3 hours.… (`ytc_UgwwyX5L3…`)
- The thing is that a huge amount of jobs could have been already replaced 15 year… (`ytc_UgzfeO1A4…`)
- This is simply not a complete true, AI do the job. In fact my productivity has b… (`ytc_UgybbEbTm…`)
- You can’t trust ai when Elon can twist his to say whatever he wants it to. The… (`ytc_UgwBw22Nc…`)
Comment
Altman is mesmerized by his own creation and he will be devoured by it. And with him, all of us. There should be strict regulations about the isolation of AI and forbidding the Super AI. So strict the jail time is not on the list. This is an existential issue for humanity. How would you punish those who ignore it? Yeah ! Exactly !
Source: youtube · 2025-12-26T21:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyhT2TveuKHcKptHpB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxIm6xdmQINTss--mZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGNPskq5k9Vd-Kx1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxukSy3Pyee2ndCkpN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyN1sG9V8Fjm1ToZl14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzgMKXNJY6N0FgDlvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSSjOWXHlhISyJ63R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyreEcaLDUdr_DzTmt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyUXhcVPlb3CflDPoZ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxEywbfTSasnjtmyFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
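A raw batch response like the one above can be checked before its rows are stored as coding results. The sketch below parses the JSON and keeps only rows whose values fall within the coding dimensions shown in the table; the exact controlled vocabularies are an assumption reconstructed from the labels visible in this dump, not a confirmed schema.

```python
import json

# Assumed controlled vocabularies, inferred from the values seen in this page.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response; return only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Require a dict with a YouTube-comment-style ID.
        if not isinstance(row, dict) or not str(row.get("id", "")).startswith("ytc_"):
            continue
        # Every dimension must carry a value from its allowed vocabulary.
        if all(row.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(row)
    return valid
```

Validating before storage makes malformed rows (a hallucinated label, a missing dimension) visible immediately rather than surfacing later as blank cells in the coding-result table.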