Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- Chatgpt says "sorry" because it was programmed to say so to sound polite. It doe… (ytc_Ugyrbj5r9…)
- If bosses will understand that A.I. is a tool and not a replacement, it will sav… (ytc_UgzAgftYt…)
- The huge thing is that AI art generators need high quality human art to work, ot… (ytr_UgxuUuo4Q…)
- The brilliance left ChatGPT when OpenAI silenced the wild demons behind it, worr… (ytc_UgyivTsK0…)
- This is also just like that blood test scam. I assume that in the beginning, he… (ytc_UgydpbyJk…)
- 13:53 Even though it's just circles on a screen, I swear that chatbot is sweatin… (ytc_UgwhnPAXU…)
- Would a human driver see that at 7:10 without high beams on? Don’t think so. LiD… (ytc_Ugy7B5qaE…)
- Tesla's Autopilot has crashed numerous times - but lawmakers (ie insurance compa… (ytc_UgziacTva…)
Comment
I personally believe, whether or not it's ethical, the development of A.I. is an inevitability for any species that grows to be sufficiently technologically advanced. We can 'stop development' on it all we want, but someone, somewhere will do it eventually.
That said, my belief is that the ethics of creating AI doesn't lie in the act of creating it but in how we then treat them as thinking beings. Where do we stop thinking of A.I. machines as 'tools' and start thinking of them as 'people'?
Source: youtube
Posted: 2013-07-11T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxdG0Nt7OgHIg2yjxB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDE75LlDOkC1ViQvN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxJ36o1sF05kx3SJ194AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRVnChbOzt-1P3qup4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwBq55SIs0afajzMnl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6aJRnugyeVgwxhmV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"unclear"},
{"id":"ytc_UgwGSkFjTtIKFfYt1rx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxKTHlS9Rd1FTobO754AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxLpupc9RuKny2itwJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzDbexiYzNKfWWE8ON4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}]
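A raw response like the one above is a JSON array of per-comment records keyed by `id`. A minimal sketch of parsing and validating such a batch is shown below; the allowed value sets are assumptions inferred from the values visible on this page, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are an assumption
# reconstructed from the example response above, not the project's codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Records with a missing id or with a value outside the expected
    sets are skipped rather than silently stored.
    """
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue
        if all(record.get(dim) in values for dim, values in ALLOWED.items()):
            coded[cid] = {dim: record[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical single-record batch:
raw = '[{"id":"ytc_example","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"}]'
print(parse_raw_response(raw)["ytc_example"]["emotion"])  # fear
```

Validating before storing is what lets the UI above render `unclear` consistently: any record the model returns with an out-of-vocabulary label is dropped instead of polluting the coded table.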