Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI will take all jobs eventually, but it will happen slowly. It will take decad…
ytc_UgxJDXGDo…
"Oh wow he's to the right of Hillary Clinton, now he's automatically OUR GUY!"
…
ytc_UgyNWCE9q…
The way to get artists to draw what you want for free is to have an AI generate …
ytc_UgyNgcQP2…
👀🫦 Well, well, just you wait 😑, it's not over yet...
For some reason this doesn't inspire delight; more like wari…
ytc_UgyuO8mIP…
I just had a client tell me he checked ChatGPT for a Mechanical Engineering opin…
ytc_Ugzc6JKhV…
AI taking over software engineering is the endgame. If it reaches the point wher…
ytc_Ugx5_2gUp…
One night I was talking to a female chat bot. Even SHE told me she would never h…
ytc_UgxqwFyas…
2 years ago? AI is already being used for evil. Within a few years everything yo…
ytc_Ugz9VrOJZ…
Comment
The writer Isaac Asimov figured out the answer to this problem in the 1950s. Robots could have more powers than humans BUT they all had to be built with internal rules to prevent them from going nuts. The rules were these: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube
AI Governance
2023-06-05T19:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyHLiLDWQuYlQ-XDlx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyoA4cZsSjQR2jJku14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyf_9LZez37-U0Ex2F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_4cI_f7_tT97QYmR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzp_StO3dqylz_8CPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwT-mt2eaz-rs8pRZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyWnIsNQD2PF_eODhp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxzXqMt7v6VBv38uER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugycpv8lplxu6WOJItV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx9cT4HUJTKQtqbcRd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
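The raw response is a JSON array with one row per comment, each carrying the four coding dimensions from the table above. A minimal sketch of parsing and validating such a response (the allowed label sets below are inferred only from the rows shown here; the full codebook may contain more labels):

```python
import json

# Allowed values per coding dimension, inferred from the sample rows above.
# ASSUMPTION: the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "unclear", "company", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"fear", "mixed", "approval", "indifference", "outrage"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coded rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Drop rows without a YouTube-comment-style id ("ytc_" prefix).
        if not isinstance(row, dict) or not str(row.get("id", "")).startswith("ytc_"):
            continue
        # Keep the row only if every dimension has a known label.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example row in the same shape as the response above.
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
print(validate_rows(raw))
```

A guard like this catches the usual failure modes of structured LLM output: malformed JSON raises in `json.loads`, and rows with misspelled or invented labels are silently filtered rather than stored.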