Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Asmogold's statement made me really angry. The most ignorant, brainless thing he…
ytc_UgzOimJcJ…
If you don't like AI is because a)you are a designer b)you are blind and ignoran…
ytc_UgyIsBcfp…
They argue with you also and when you ask to speak to live person it snaps. The …
ytc_UgzVwr8aM…
Same gurl. Like yes I have people who would love to help me out but I don't like…
ytr_UgzMtxg8A…
AI cannot destroy us if we don't build the strong enough bodies they need to cau…
ytc_Ugz1mWWyd…
These kind of videos make people complacent. There is going to be turmoil. AI i…
ytc_UgxHUYU4c…
bro, the real reason people are loosing jobs is the economy is crashing. AI is j…
ytc_UgxF0rbpK…
AI lacks motives, the day when AI acquires motives... We're all doomed, but for …
ytc_Ugwkk1LD4…
Comment
I had hoped that science had a 'cyclic' (a steer-stick on a helo) to steer ai. For example, Select a category; let us use Hinton's medical imaging, I had at least hoped one could confine ai to 'parameters' via a hardwired control unit designed specifically for 'ai' with the idea of making 'ai' remain in its own lane, with the problem-solving power of 'ai' but only with the autonomy of an organ grinder's monkey on a leash... Confined to a category like Medical Imaging., or some other product category.
youtube
AI Governance
2023-05-11T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxdvM9RnKk8KKh4s794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy_QRMLrXn9oWc1dvp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwavUW2VZ682Ga3Y5B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV-EnOcgcuUO1jwr14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"unclear"},
{"id":"ytc_UgwWm81rf66qyKbmNQ54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyXBigIwW-qavjV8HJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx6amyQjg5kE6-dNdR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqL8fxykLNTokI7UJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLP4iPrYYL_w2wkyh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyWCE2hBuEa88pzWth4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
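The raw response above is a JSON array of per-comment codes, one object per comment with four coding dimensions. A minimal sketch of how such a batch response might be parsed and validated is shown below; note that the `ALLOWED` value sets are inferred from the values visible on this page, not from a confirmed codebook, and the sample `id` is hypothetical:

```python
import json

# Assumed allowed values per dimension, inferred from the codes visible
# in the table and raw responses above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and validate each coded record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical single-record batch, shaped like the responses above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"industry_self",'
       '"emotion":"unclear"}]')
records = parse_batch(raw)
```

Validating against a fixed value set before storing codes catches the common failure mode of the model inventing an out-of-schema label mid-batch.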