Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I work in a no name family run company that has a few CRUD apps that were made b…" (rdc_jprf247)
- "This is your daily reminder to always steal AI "art" and draw it for yourself…" (ytc_UgwGs8nId…)
- "My advice to Who ever reads this : Do not add personal data to AI and be careful…" (ytc_UgyPnsTXF…)
- "2:48 seems like pretty sketchy logic. If AI makes workers 10 times more producti…" (ytc_UgxBLJhFP…)
- "This is like super micro managy manager on steroids, literally watching you your…" (ytc_UgwN0-e0a…)
- "Ugh looks like ChatGPT developed clinical anxiety. Does anyone know how to cod…" (rdc_jvn8ak8)
- "Creepy af, too many movies/books have shown how it's a terrible idea to grant wi…" (ytc_UgyUUj_gk…)
- "I think the push for autonomous trucks may be in response to a few different rea…" (ytr_UgygtfUhO…)
Comment
The fact that Biobank data for hundreds of thousands of UK residents is on sale on Alibaba is the latest example that our data cannot and will not be kept safe.
I know we have laws that allow anyone to photograph or video us in public places BUT hardly anyone that might do that stores massive amounts of data and can link it to our identity data.
Also, we keep hearing and seeing examples of where AI gives errors. Researchers suggest in some important areas it is wrong 25% of the time. We know people have been stopped by Police or by shop security because LFR has flagged them... wrongly. Why are we assuming guilt instead of innocence, and why are we letting a hackable system with known failures do that? And can we guarantee there will not be bad actors accessing the data via legitimate routes (like they did with Biobank)?
Source: youtube · 2026-04-23T21:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzVidiSQGuTllLe61t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwuqfO9z_gjpiKxVIF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuIntLAnP34aUoz5R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx8FxNhdq59fO8fWsV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx3botcPFM5Mw64nZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw7M43DPCRpUc_Tqx54AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwOw9oHrDpRO8nd_EZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugy_1jM6LFkIJthmzah4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzkYPstWd0rdRIpfQl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx1az0sUqa3zo00Mep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
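The raw response above is a JSON array with one object per comment, each carrying an `id` plus the four coding dimensions from the table. A minimal sketch of how such a batch might be parsed and validated before it lands in the results store (the allowed code values below are inferred from this one sample and are an assumption; the real codebook may contain more categories):

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may define additional codes).
CODES = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows.

    A row is kept when it is a dict with an "id" field and every coding
    dimension holds a value from the (assumed) codebook; malformed or
    off-codebook rows are silently dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODES.items()):
            valid.append(row)
    return valid

# Example with one valid row and one row using an unknown policy code:
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "company",
     "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
    {"id": "ytc_example2", "responsibility": "company",
     "reasoning": "consequentialist", "policy": "subsidize", "emotion": "fear"},
])
kept = parse_batch(raw)
print([row["id"] for row in kept])  # only the first row survives validation
```

Dropping invalid rows rather than raising keeps one hallucinated code from failing a whole batch; a production pipeline would more likely log and re-queue the rejected IDs for re-coding.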