Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgyCGVpII…: "People think that the current state of AI will be the state of AI forever, but i…"
- ytr_UgxsxZwcY…: "Such multi-layered manipulation is exactly what an AI could do effortlessly, onc…"
- ytr_UgxmuAaWo…: "Alex Moldovan- Heres a scenario. You are following a truck with loose pipes and …"
- ytc_UgwGuDwne…: "This is sad. I hate chatbots. This administration is evil and funded evil greedy…"
- ytc_Ugwp2XwTJ…: "How can we be sure you're not just telling us what you want us to believe?…"
- ytc_Ugwkchu5L…: "But the real camera vs lidar arguments should be based on sensor readout, cost, …"
- ytc_UgwyEp0FI…: "Good video but the argument that AI cant produce art is flawed in my opinion. Th…"
- ytc_UgxLGiu2_…: "Program A.I. to assist with jobs, and not take jobs! Place Legal Human restrain…"
Comment

> Similar arguments were made when the atom was split. Clearly a more real and immediate threat than AI and perhaps not the best analogy, but it motivated the world to ensure that controls needed to be put in place. I believe in humanity and that our preservation trumps technological advancement.

Source: youtube · AI Governance · 2025-07-31T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
    [
      {"id": "ytc_UgyCLBnvRRW2NkIF-eB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
      {"id": "ytc_Ugzg8ZN-A0jXFzfapHF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
      {"id": "ytc_UgywNEQUtuam7n9Eg_t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
      {"id": "ytc_UgwkL2crjPVckKNTi-N4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
      {"id": "ytc_UgwTTE2fXPu2lDSZyFd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
      {"id": "ytc_UgzKbXejLP_Zosm4lKZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
      {"id": "ytc_Ugy-Fp8Dg6Vx9plRWKh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
      {"id": "ytc_Ugx9DWWWUV3RIxzm-Tt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
      {"id": "ytc_UgwPYaOTqr66o2fFqld4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
      {"id": "ytc_UgxXExKUrAeSNQHtD_d4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
    ]
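The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of how such output could be parsed and indexed for a lookup-by-ID view, assuming the label vocabularies are exactly the values seen in this sample (the real codebook may define more labels), and using a shortened two-record excerpt as input:

```python
import json

# Two-record excerpt of the raw response shown above (the full response
# contains one object per coded comment).
raw = (
    '[{"id":"ytc_UgyCLBnvRRW2NkIF-eB4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgxXExKUrAeSNQHtD_d4AaABAg","responsibility":"developer",'
    '"reasoning":"virtue","policy":"liability","emotion":"outrage"}]'
)

# Label vocabularies inferred from the values seen in this sample;
# the actual codebook may allow additional labels (assumption).
VOCAB = {
    "responsibility": {"none", "ai_itself", "user", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval"},
}

def index_codings(raw_response: str) -> dict:
    """Parse the raw JSON array and index codings by comment ID,
    skipping records with a missing ID or an out-of-vocabulary label."""
    indexed = {}
    for rec in json.loads(raw_response):
        cid = rec.get("id")
        if cid and all(rec.get(dim) in allowed for dim, allowed in VOCAB.items()):
            indexed[cid] = {dim: rec[dim] for dim in VOCAB}
    return indexed

codings = index_codings(raw)
print(codings["ytc_UgyCLBnvRRW2NkIF-eB4AaABAg"]["emotion"])  # -> indifference
```

Validating against a fixed vocabulary before indexing guards against the common failure mode where the model emits a label outside the codebook; such records are dropped rather than stored silently.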