Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The idea is that it could be a remedy against competing tribes which fuels the dangerous unchecked AI race. So that humanity can be united in developing safe AI. Barreling ahead towards possible extinction is his worry, not that he wants a human world dictatorship.
youtube · AI Governance · 2025-07-02T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgyCLBEJAZHeRl4oaYV4AaABAg.AK-wckndrYCAK3sdOPNRza","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzMwA-2As4laPuRoVp4AaABAg.AJz3mGo7qjuAK598EIrC3o","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyENOH579RxGauK-Fd4AaABAg.AJxQi3GpjsJAJyNJyA82pX","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxGcR7_R7Qvr8MeJN54AaABAg.AJxDAzLcnKoAK53ZbbSD8N","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzQ4G3PwmHBfIFbOoV4AaABAg.AJwsdfgS75tAJwtX_TBG5C","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwRzp1RMgsGaXXIHYR4AaABAg.AJwRrAGjn4UAK4EOuziUyG","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxZT6LuDvkTW0mRrNt4AaABAg.AJwL-E2cvILAK4FBe6qO1x","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytr_Ugz8kXJ54R54LGC4pDl4AaABAg.AJw8sFdpsn3AK4L8pyijZP","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytr_UgxuKqS5SR4qTNHORi94AaABAg.AJuhcfLiaW9AJv0uOzmZHv","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyCLi-a9qrVIhXsxQB4AaABAg.AJu8hia4-F-AJv5cYcGsPA","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
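A raw response like the one above can be checked before it is stored. Below is a minimal validation sketch; the allowed value sets are assumptions inferred from the values visible in this page's samples and Coding Result table, not a confirmed schema.

```python
import json

# Assumed vocabulary per dimension, inferred from the sample output above.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse the model's JSON array and return any out-of-vocabulary codes."""
    problems = []
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"),
                                 "dimension": dim,
                                 "value": value})
    return problems
```

Running this over a batch returns an empty list when every record uses only known codes, which makes it easy to reject or re-prompt on malformed responses.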