Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Would it be fooled by a hacker though? The question I would have is do these policies apply before submitting data or after it is reviewed? Is a DMZ network server involved where the policies are applied and data is reviewed before submitting to ChatGPT?
reddit · AI Governance · 2023-05-20 UTC (Unix 1684627100) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_jksgo9t","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_jkro1cf","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"rdc_jkytrfb","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_jkvut3l","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"rdc_jkss7ef","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}]
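The raw response above is a JSON array of per-comment code records keyed by `id`. A minimal sketch of how such a batch response could be parsed and validated — the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown; the `parse_codes` helper is hypothetical, not part of the tool:

```python
import json

# Two records copied from the raw response above (truncated for brevity).
raw = '''[
 {"id": "rdc_jksgo9t", "responsibility": "none", "reasoning": "consequentialist",
  "policy": "none", "emotion": "approval"},
 {"id": "rdc_jkss7ef", "responsibility": "company", "reasoning": "deontological",
  "policy": "liability", "emotion": "fear"}
]'''

# Every record must carry these keys to count as a valid coding.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_text):
    """Parse a batch coding response into {comment_id: codes}, skipping malformed records."""
    records = json.loads(raw_text)
    codes = {}
    for rec in records:
        if isinstance(rec, dict) and REQUIRED <= rec.keys():
            codes[rec["id"]] = {k: rec[k] for k in REQUIRED - {"id"}}
    return codes

codes = parse_codes(raw)
print(codes["rdc_jkss7ef"]["emotion"])  # fear
```

A lookup like the one in the coding-result table would then be `codes.get(comment_id)`; an ID absent from the response (or a record missing a dimension) simply yields no entry, which is one plausible way a dimension ends up rendered as "unclear".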