Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Now do the calculations on uploading this video about ai. Then all the different…
ytc_UgyEARzUF…
🥲 bro Disney did what you did didn’t want them to do. They now are making their …
ytc_Ugwv13KHC…
LLMs run of a dataset compiled from the internet. The dataset is corrupt from th…
ytc_Ugw50tZd_…
It found tickets and did some shopping! WOW unreal! This is not AGI this is lite…
rdc_n3ttvwc
Feels like all the holywood unions need to be banding together to stipulate that…
rdc_jj5rj4d
What they are doing is this: the product is built with sticky emotional resonson…
ytc_UgzG0i_tv…
@KiokiIch1-xp Biden is one of the primary architects of the unconstitutional war…
ytr_UgyysYHg8…
I love when ai techbros talk about disabled people for the first time in their l…
ytc_Ugz_WgA6D…
Comment
No, not because of reasons. If you provide an LLM with sufficient context, you can get the exact opposite answer by conditioning the model with one-sided argumentative input and asking it to summarise the argumentative balance. In other words, if you constrain the LLM to your arguments, you can move the needle in the direction you want. That's what Knowles did here. ChatGPT was being given arguments in favour of God's existence. Had Knowles provided nothing but arguments against God's existence, the percentage would have fallen.
youtube
2026-01-01T17:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgzEA3acZ-l9c1rl7nJ4AaABAg.9pmSYRZ_aox9q-B9qxX-oQ","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxqPt6Ei4yeeYfjnqt4AaABAg.9pmSSeEVBSP9pmUClAwSE_","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxqPt6Ei4yeeYfjnqt4AaABAg.9pmSSeEVBSP9pmV1gpYZlP","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgyJ-g4s0YCK3ww-udB4AaABAg.ARcYd3-3BcHARhhqklV7Gc","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwvFM9vI6CDiSqRt214AaABAg.ARVPc8reC8aARVQ19ORiZF","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx3-6f3SPllSDPEw7h4AaABAg.ARQ6IAa2mZHARS4M5_etBS","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyqLx-ZGqZRaByzpcZ4AaABAg.ARPbenVSo9UARSS35qATVR","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxRSwpMwJBU8t6gmSp4AaABAg.ARPRq44tQOuARPWU2xsmAY","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyqO5BgJLuUqdy_NhB4AaABAg.ARPOTl2SBFFARS2xh2ku_Q","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugx1Xt-9E2fJbIKjacx4AaABAg.ARPORyBS6mhARQLqyQLIjY","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
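The raw response above is a JSON array of coded records, one per comment ID, with the four coding dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed label sets are inferred only from the values visible in this page and may not cover the full coding schema, and `validate_batch` and the sample record are hypothetical names for illustration.

```python
import json

# Label sets inferred from the samples shown above; the real schema
# may define additional values for each dimension.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "indifference", "outrage", "mixed",
                "approval", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against ALLOWED."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch in the same shape as the output above.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"mixed","policy":"none","emotion":"mixed"}]')
batch = validate_batch(raw)
print(len(batch))  # 1
```

Validating before storing is what makes a "Coded at" row like the one above trustworthy: a record that fails the check can be flagged for re-coding rather than silently written into the results table.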