Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Interesting video. I was kinda waiting for a video about AI from the depression …" (ytc_UgzACBaVc…)
- "It sounds so generic to me. Soulless. It’s not AI’s fault that we call this crap…" (ytc_Ugxc_MEJs…)
- "@yrurgrhhr because to run any kind of generative A.I, you use work that is not y…" (ytc_UgxoZHjoa…)
- "I’m fed up of all of these AI videos popping up and the people do a whole shitti…" (ytc_UgyzfGWhZ…)
- "God his constant jumping to people who don't like AI use wanting them dead is so…" (ytc_Ugxpjp-yL…)
- "For some reason this channel uses more AI than most others. These apocalyptic \"w…" (ytc_Ugx2D7CBG…)
- "We need to make AI create more videos. Then get AI to watch them. Close loop. I…" (ytc_UgzHzB_d4…)
- "I think Mr. Yampolskiy sees things a bit too dark. :) I have been chatting with …" (ytc_UgzRNQZL0…)
Comment
I think you transcend the rules by compartmentalizing things into different contexts. You need to have a meta-rule about what context to use, or you use all the different contexts at the same time and see which ones work. But, I agree with Penrose ... we are being sold a bill of goods about AI in order to push a political agenda. AI = Automatic Inequaity.
I think also the "Turing Test" has been pretty widely discredited.
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2025-08-03T15:3… |
| Likes | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwDl_eMRq9ssn9Z4PR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwKr6BXszu60qiwvUB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw11mjqL43kXDL0n3t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzqN0dWFin_cYEt9Ep4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxr_DaIiwOLbPTP9PF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxovVkGpMX-5A4c0Ot4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzBIx9GVETEtkoDGKJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxfZv0TYLCq90SBlGl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxFatn2O_6CngSS6Zh4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgztiAhzvBPg4LVbSiF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
```
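A raw batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are only those observed in this sample batch (the actual codebook may permit more), and the function name `parse_batch` is invented for illustration.

```python
import json

# Dimension values observed in this sample batch; the real codebook
# may allow additional values, so treat these sets as an assumption.
OBSERVED_VALUES = {
    "responsibility": {"company", "none", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "fear", "approval", "mixed"},
}

def parse_batch(raw):
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the observed sets.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError("record is missing an id")
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in OBSERVED_VALUES}
    return coded

raw = ('[{"id":"ytc_UgwDl_eMRq9ssn9Z4PR4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_UgwDl_eMRq9ssn9Z4PR4AaABAg"]["emotion"])  # outrage
```

Rejecting out-of-vocabulary values at parse time keeps a single malformed LLM record from silently polluting the downstream dimension tables.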