Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- After listening to this, there is no way humans will be replaced by robots or AI… (ytc_Ugyh0X54i…)
- This video is disgustingly ableist. Not everyone has the talent to draw and we s… (ytc_Ugximm40B…)
- " Mark my words AI is far more dangerous than you think "--- E. musk… (ytc_UgwNaAGHG…)
- A Nazi made an AI and the AI is also a Nazi. This isn't a shocker.… (ytc_UgzpIQRvc…)
- You know what the worst thing is? The grandfather of AI said they are digital be… (ytc_Ugwbp9VM7…)
- Yes....I've imagined it for years. You can ask A.I. to help you solve problems. … (ytc_UgwjkDOzR…)
- What happens if I carry a big cardboard sign of a green traffic light and put it… (ytc_Ugw60LO2c…)
- Everyone goes on about the impacts of A.I. and robotics whilst shying away from … (ytc_Ugw7CZz0Z…)
Comment
It's not, we are.
AI is just a system that knows only what the designer tells it and only follows the rules set forth before its goal. If you give it no rules and only a goal it could do unimaginable things. If you give an AI a foreign concept it has no data set on then it can not reply, I've tried. Machine learning can be viewed as a data set, like a chatbot with access to libraries, the machine learning writes new data to compile results from. AGI is not here, give it 20 years, but once AGI is real all bets are off depending on who designs it and what they give it access to. DeepMind is nowhere near AGI, it's a multi-structure AI, and could never become self-aware/conscious.
youtube · AI Governance · 2022-07-29T03:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyYR9zZo1DNf-tif5d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyvFIioMBpFL6nTV0V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzmlu7TopL9odpS0b14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxEROAORQkYvm1mRWN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQiuamqwXK1xDZEUZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzsLOEq6uB0TXhn4xF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzgZ3Gppw9JAv85KQ54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4aeFdlStwlhj6mwp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyK8d5gSsekKlBXbul4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyz95QsCIC_6r9kV014AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
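A response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is illustrative only: the `parse_coding_response` helper and the required-key check are assumptions, not part of the coding tool; the dimension names come from the response above.

```python
import json

# One abbreviated record in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugyz95QsCIC_6r9kV014AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Every coding record is expected to carry all five fields.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(text):
    """Parse a raw LLM coding response and index records by comment ID."""
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
    return {rec["id"]: rec for rec in records}

coded = parse_coding_response(raw)
```

Indexing by comment ID makes it easy to join the model's codes back to the original comments for inspection pages like this one.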