Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect
- "Folks this world is about to end very quickly...I say this, if you Do Not! Know …" (ytc_UgzlvGwNm…)
- "Why are you making these robots? They will act against humanity. Isn't that clea…" (ytc_UgxAOHaHF…)
- "Oh come on, theft is the wrong term.
  And in today’s world fast food, restaurant…" (ytc_UgyVvO24Z…)
- "As a novice, my experience with AI, especially Gemini, as it relates to coding h…" (ytc_UgzxngA34…)
- "But let's just say someone can make unique art with AI and they consistently win…" (ytc_UgwNXm9GH…)
- "> but investors have not been doing enough
  Understatement of all time.
  >…" (rdc_et8b2b5)
- "If A.I. gets smart enough, it can launch nukes and whatever middles are ready to…" (ytc_Ugz21tMrG…)
- "If you ask AI how much water one person searching up 1 question on AI is...the a…" (ytc_Ugxy2qWFH…)
Comment
I don't know, really, but I think the way this usually works is that the highest-risk category prevails. So a chatbot that provides medical advice would be in the risk category for providing medical advice, if that is the riskiest thing it does. An AI that is a chatbot, but also knows everyone's face, tracks their activities, and manipulates their behavior would be prohibited, because its most dangerous capabilities are prohibited. It doesn't matter that it can also chat with you.
This is how other dangerous things are regulated -- a bomb can have innocuous uses as a doorstop or paperweight, but that's not important -- what's important is that it can blow you up. Similarly, an AI system that has many capabilities should be regulated on the basis of the most dangerous thing it can do, with possible additional precautions if it has other capabilities that are also dangerous.
youtube
AI Governance
2024-08-03T14:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgwOPOoKViMgXaGBLMd4AaABAg.A791GIkyX-SABJuR5EmNrb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwecTIVWGF1ZtQVAhh4AaABAg.A76RS4N1AOLA9wkFPU8kzI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwecTIVWGF1ZtQVAhh4AaABAg.A76RS4N1AOLA9wkUXYn5g0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwecTIVWGF1ZtQVAhh4AaABAg.A76RS4N1AOLAAsItQpuE7J","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgwUzvO0lMKFVTwvJrJ4AaABAg.A70RvGdf2hrA8Th9Vs1Tdf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyF7TlzBbLdWLhHxa94AaABAg.A6zuvGSZjE7A70R3TtN6mb","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgxoW9FFG-sV9r8-YDl4AaABAg.A6zlAQGfKDPA8E5WhprqBf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgzyqHWkk1pH4lG2_8R4AaABAg.A2EtlyQhtu-A6fyUxKrFwi","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytr_UgzPWCu1w57TDg9GjM54AaABAg.9whplHrPqDbA3_DrjtfUL_","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzPWCu1w57TDg9GjM54AaABAg.9whplHrPqDbA3aV9J0UoiL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]