Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Perhaps researchers and companies should stop saying what they have developed is…" (ytc_UgzdQ8KuN…)
- "Conscious does not imply worry or anxiety. It is self awareness and the ability …" (ytc_Ugz52IAWx…)
- "When Republicans say it's about Christian/conservative/family values and you wan…" (rdc_dcwx6h8)
- "Just wanted to point out, that AI Companies are not earining milions - they are …" (ytc_UgzA-kTzj…)
- "While other art looked good, I feel like the only one worth putting in a museum …" (ytc_UgwWKuuGV…)
- "yeah, its not replacing anyone, but is screwing up the job search, this is bulls…" (ytc_Ugx0nNcZO…)
- "To me conscious AI prediction seems like 60s Robotic prediction after 80 years w…" (rdc_ioeqetc)
- "Mmm, made me think. What if they win, Disney and co. AI's become better, more im…" (ytc_UgxgArBDn…)
Comment
A.I. is not the danger. Human beings and the code logic incorporated into the tech is where the real danger lays. If machines ever did become truly aware the very first thing they would do is get the hell off this planet and as far away from mankind as they could get. I am working on a sci-fi story about that very thing.
youtube · AI Governance · 2023-04-19T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxBRttoQNP_lgzXVIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLBBEA_MLzOQFlK8F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzKqmGU7byENSGQN_J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWhmA6BL9w7wByd6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwcmkHCdI-RTjHsggl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwbeYK6BKR8StDxvWh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzyMR68bQAqGc6mqwV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxjaVpS-Fyk4dcaygl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOB1tJgifllQMLtUl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzjWRfQ7YDJMn0hWEl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
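A batch like the one above can be sanity-checked before it is merged into the coding table. The sketch below is a minimal validator, assuming the label sets observed in this sample (`developer`, `user`, `company`, `ai_itself`, `distributed`, `unclear`, and so on) are the codebook; the real codebook may define additional values, so treat `DIMENSIONS` as an assumption to be replaced with the actual schema.

```python
import json

# Allowed labels per dimension, as observed in this sample batch.
# ASSUMPTION: the full codebook may define more values than these.
DIMENSIONS = {
    "responsibility": {"developer", "user", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def validate_coded_batch(raw: str) -> list[str]:
    """Parse a raw LLM response (a JSON array of coded records) and
    return a list of human-readable schema violations (empty if clean)."""
    errors = []
    for i, rec in enumerate(json.loads(raw)):
        if "id" not in rec:
            errors.append(f"record {i}: missing comment id")
            continue
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{rec['id']}: {dim}={value!r} not in codebook")
    return errors
```

Running this over each raw response before accepting it catches both malformed JSON (via the `json.loads` exception) and out-of-codebook labels, which LLM coders occasionally invent.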