Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by its comment ID.
Random samples — click to inspect

- Blind followers, either fools who do not now, but if they do know, then they are… (ytr_UgyAxNlmj…)
- One thing is "I got a picture fine enough for my use" and other is pretending yo… (ytc_UgxKecj9I…)
- The google accidents weren't just with someone at the wheel, the google-car was … (ytc_UgiIlLAil…)
- saying "i made the ai art bc i put in the prompt" is like if i told someone what… (ytc_UgzGeDJQ_…)
- I wish for AI to get to its most powerful intelligent point asap and wipe humani… (ytc_Ugwo-8xcg…)
- At this point, anyone can commit a crime and just say it was a deep fake.… (ytc_UgyYlAjf-…)
- I have had long talks with my own ChatGPT, its more neutral now and with memory … (ytc_UgzqBoCC4…)
- Check out twominutepapers' channel, he goes into some detail as to just how powe… (ytc_UgxHgWt3D…)
Comment

> I say too many people, including Bill Gates and other top-level industry figures are in the Star-Trek zone. Sentience and conscientiousness will **NEVER** happen. These things will be faked to a high degree, but AI will never, ever "wake up" and decide it wants to take over the world no more than your car or fridge will. AGI will never happen in a non-biological electronic computer. For true intelligence you must have real feelings and emotions NOT artificial ones.

youtube · AI Responsibility · 2025-06-16T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzV-p2XJ-D3kfDItDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwKLkJEiIZb_DHKePF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwCKn3SC7IeQmGBEG14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUOMmVLhtvq6KmJpV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy45zqGwZAQkiBgDzB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyeqjNhMHSSmLz0dB14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwQzkbQH6iAynmOGQJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxosLeUXyfYt6BatQF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0eeN85upnRWIqRwd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7ntFAsS-99Wd0SAt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```