Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "People who ask chatgpt to do their homework and pirate games and movies are sayi…" (ytc_Ugznu9sc_…)
- "I dont understand how they are allowed to call it self driving or autopilot if t…" (ytc_Ugy6Qt8Mo…)
- "Did anyone else spot the AI voice used to read the copy in the news report snipp…" (ytc_UgwYqwzZV…)
- "But AI is programmed as well, so there will need to be programmers to develop an…" (ytc_UgzZeS7Nl…)
- "Ai is automaton. It's a mirror of emotions or trained into concerns like a psych…" (ytc_Ugzrlfoql…)
- "I have no idea about the discussion or any stake in either side; however, I thin…" (ytc_UgyrQic8V…)
- "\"digital drawing tools are the same as AI!\" Really now? How long did it take to…" (ytc_UgykPs1oa…)
- "Most \"ai\" is single purpose unlike actual people. Humans use the same brain for …" (ytc_UgwNsmJXb…)
Comment
As a software developer, I think I've always had more reasonable expectations of AI from the start. I noticed I was a lot more excited about it when no one heard about it or cared about it and that I've been a lot less optimistic about AI once all of the corporate hype got underway. There is a reasonable factually grounded baseline of expectations. Yes, we have a new technology. Yes, we are now capable of doing things that we were not capable of doing previously. That does not mean the fabric of reality is about to unravel.
youtube · AI Responsibility · 2025-10-06T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwSau22Dkd0-fYZDsV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3U4sFq-CpvKQ4Kc94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmU7PDi3bfRv5wlDR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy53aztPha-gn5hbI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugx0mskdlljKknO1Sl14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyzbMJLkU3CbkU8pSh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzE9SS8Hbxn6SoTd6F4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwKv6ZoXSr3zGAxaYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxjd2jpRCx4Q2NBcsh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzCLlisXoAp1sEjULF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
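The raw response is a JSON array of records, one per comment, each carrying the four coded dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal Python sketch of the lookup-by-ID step, assuming only the field names visible in the response (the `index_codings` helper and the inlined two-record sample are illustrative, not the tool's actual code):

```python
import json

# Two records copied from the raw response above, inlined as sample input.
RAW_RESPONSE = """
[
  {"id": "ytc_UgzE9SS8Hbxn6SoTd6F4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwKv6ZoXSr3zGAxaYt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coded dimensions seen in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw: str) -> dict:
    """Parse the model output and key each record by its comment ID,
    skipping any record missing one of the expected dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {dim: r[dim] for dim in DIMENSIONS}
        for r in records
        if all(dim in r for dim in DIMENSIONS)
    }


codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgzE9SS8Hbxn6SoTd6F4AaABAg"]["emotion"])  # resignation
```

Indexing by ID first is what makes the "look up by comment ID" view cheap: each inspection is then a single dictionary access rather than a scan of the full response.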