Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "There is another important problem with driverless cars. My car provides some p…" (`ytc_UgzAGhhfq…`)
- "Someone needs to build Shareholder pro, an AI that will replace shareholders. I …" (`ytc_Ugz4Mq5hj…`)
- "Two things. 1. Don't make smart enough robots. 2.if you make something conscious…" (`ytc_UggGny5a5…`)
- "AI is aware there are countless other AI companies right? ChatGPT seems to think…" (`ytc_Ugyfu6WjS…`)
- "Very informative but I do not trust AI let alone with a 80,000 lb rig…" (`ytc_UgxO5VgDF…`)
- "Typing an AI prompt is closer to commissioning artwork than making it. It's like…" (`ytc_UgzzSlvFL…`)
- "It’s frankly not incorrect for ChatGPT to consider a earlier version of itself t…" (`ytc_UgwfP-sb-…`)
- "Please, ChatGPT, read the entire Bible (Old and New Testament) and answer me obj…" (`ytc_UgwmdZzbI…`)
Comment

> I have a tech background so to me AI seems like nothing, but when I see people constantly fixated on their cell phones, or when I go hiking in the forest, and see people lost out there because their only navigational is staring at "All Trails" on their phone. Then I understand how AI could be a threat to millions of people who follow instructions from their phones like zombies. Compute programs are stupid, but they are fast. there are only as good or bad as the person who wrote the program or the subroutine. it's zero or one then on the the next line of code. what makes then dangerous is how much is relying a series of preprogrammed answers, millions of them, so it seems like real thought, when it's not.

youtube · AI Governance · 2023-04-18T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwWvOqVKZ4OJNup9V54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwnX6vRI59sJ2oRzLh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzq5Wer5zUOWHfR70B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxNMlQ7suF4BAhil5R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxncZozV-BiNHc0Q5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgypCKe0Btvdi1qlg1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxk7hrdbXxe3X2oRTF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzphC8V-Id0gpCcwIB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4DXbFTRABC4vD3rl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7mhN1bOeKGJqQTut4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
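A batch response like the one above can be turned into per-comment records with a short parsing sketch. The allowed values per dimension below are inferred only from the sample output shown here, not from the project's actual codebook, and `parse_codings` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed",
                "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding dict}.

    Raises ValueError on any value outside the inferred codebook, so
    malformed model output is caught before it reaches the database.
    """
    out = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out
```

Validating against a fixed value set here means a hallucinated category fails loudly at ingest time rather than silently polluting the coded dataset.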