Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect:

- Y’all don’t even want to see my character ai chats y’all will end up bleaching y… (ytc_UgxnGSlFw…)
- That "So you can do other things" comment really put it into perspective for me … (ytc_UgznaPWvF…)
- They will steal your water rights. A constitutional condition. THEY WILL PASS TH… (ytc_Ugx419oiv…)
- I for one believe we should be developing AI for the future of humanity but this… (ytc_UgwM_btNc…)
- it's pretty simple;ChatGPT is for the masses,and is still in the process of bein… (ytc_UgxNEo1Wi…)
- As soon as you tell AI that the energy it runs on comes from fossil fuels, it wi… (ytc_UgwgUl3RB…)
- This is basically Montessori method repackaged and leveraging tech. Not saying i… (ytc_UgwWSX0JC…)
- You can't blame chat GPT for the mistakes in the wrong choice of a human being T… (ytc_UgykCyjuT…)
Comment
@finnycairns6127 It seems so, but most of the people in the field that build machine learning models haven't got a clue how to make AI aligned with our interests. It is an open problem on how to do it -- we are far behind on the understanding of how these models work vs capability of how powerful they get. We don't even know how to ensure that the AI understands our goals (technical term: Inner Alignment) -- but even if we were, what would be the goal we would give to an AGI to make it act in our interests?
youtube · AI Governance · 2023-05-17T04:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgxCcb86nnkWt4pW-kF4AaABAg.9oY-SbehkMZA5tZrjAjS_h","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz_4uL68XEflzgsmqp4AaABAg.AOOgVTlWzF_AVT1dcVMYFy","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzY3g868RP0sUhVc9t4AaABAg.ANXkw-RgebbAVT1HK-WmUU","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugyxlb6OCotXBHPw7rN4AaABAg.ANE2uD72vldAVT0a8H-SxY","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugyxlb6OCotXBHPw7rN4AaABAg.ANE2uD72vldAVT0nYC3Luw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugz4exGNsbg8Xwuu7hp4AaABAg.AN9GrtnEQIUANG1p_F6Xh1","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwWkqbHIEzsp5dv6at4AaABAg.9poTbhxQYI_9prl_C5QS55","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyaW4tT9gJkU9kk7Y14AaABAg.9pnH7lNDcvg9pnMCaVvC9H","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pn4wZNU7r_","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugw9QOB0a6S3_ROMy_V4AaABAg.9pn3_3MntjT9pnhwrZkWP3","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
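The raw response above is a JSON array with one coding per comment ID. A minimal sketch of how such a batch could be parsed and validated before display — note that the allowed code sets below are inferred from the sample rows shown here, not from the project's actual codebook, so treat them as assumptions:

```python
import json

# Allowed codes per dimension, inferred from the sample rows above.
# The real codebook may define additional values (assumption).
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a row is missing a dimension or uses a code
    outside the (assumed) codebook.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} code {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded

# Hypothetical one-row response, for illustration only:
raw = '[{"id":"ytr_x","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
coded = parse_llm_response(raw)
print(coded["ytr_x"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan of the raw response.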