Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
You could have saved alot of Money and doing all things by yourself within just …
ytc_UgysZmWVE…
i’ve had ai artists say the same thing to me before, calling me egotistical beca…
ytc_Ugxp7ZL6t…
bro when i scrolled down there was a video where a guy lost his job because of A…
ytc_UgxmKy_7G…
Help us put the groceries away?? 😂😂😂 you really are gonna need a robot to put gr…
ytc_UghCn0ip8…
So you are admitting Waymo has already lost. They have no time to gather enough…
ytr_Ugyqk4TP3…
Hacked? Who knows. That is why self driving should not exist. There should be …
ytc_UgzRIqqsx…
I'm not saying that we should not be concerned about AI being a sentient being b…
ytc_Ugw8HYw-d…
So I just tried this, by putting the statement "I am unlovable." into chatGPT an…
rdc_jif12pj
Comment
Several regulation should be implemented.
Things like:
No single AI should have access to multiple systems. Example: if the AI controls the driving of a car, it should never have writing access to talk with others cars or even systems like streets lights.
AI should only exist to give decision support or never full control a system. this way you will always have a human with critical thinking monitoring the results.
No full autonomous AI should be use in weapons. Building or research anything about this is a declaration of war to every country on earth. This ofc would need most of the countries in the world to make aggreamens ike we did in ww1.This is in line with the previous point but for war.
youtube
AI Governance
2023-04-18T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwH3io5ZzsUQDYGVkR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwM-a8chlR3Tsgs0O94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx6hjVY0cqhqvA-ICV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyH3jmAEq5Nwf8rh014AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx5avQuJAhNSYuQl-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0yODaQrHb8p3cm394AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzj2mdmbK47VTNn1XR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzg4zJx2URE9Una-CR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyvEagXlnafSG6Q-IR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlQVFwTsdcWslAePR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]