Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "As AI gets so smart it still cannot understand human emotion...If you ask how to…" (ytc_UgyoP5HwY…)
- "if AI is really intelligent, i am not scared. it would be wonderful, talking to …" (ytc_Ugy0K5is3…)
- "the real take away isn't that "If you cheat, this AI will flunk you out of colle…" (ytc_Ugx0iYuwP…)
- "That is just dandy. The condo across the hall has ring installed. My front door …" (rdc_ffwv9an)
- "All of your interactions with AI and anything digital goes towards forming your …" (ytc_Ugx8Nxry-…)
- "this is hella stupid. It is so full of holes that you could use each of the hole…" (ytc_UgzoKvibx…)
- "No wonder he isn't investing in wearables. There's no way I'm putting a chip in …" (ytc_UgyAilBNp…)
- "Media commercials TV shows all of that have a negative effect on kids, but no on…" (ytc_UgyBLETvw…)
Comment
> One risk to consider is that we may unintentionally push AGI (e.g. a more advanced ChatGPT) toward harmful goals. For e.g., through input-output feedbacks and design, GPT will increasingly seek to give us back what we want (it's already there in many ways) which can unintentionally mould it into the 'perfect manipulation system', starting with 'proto desires' that can evolved into goals. At the same time, the above scenario suggests it's goals would evolve into 'Telling humans exactly what they want' but that's based on one angle or assessment and not considering other feedback loops and how they will integrate over time; in turn, we risk 'creating GPT's goals' which would likely include increasing manipulation.

youtube · AI Governance · 2024-12-14T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
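Because the model codes comments in batches, the raw response is a JSON array, and the viewer has to locate the entry for the inspected comment and check it against the codebook. Below is a minimal sketch of how that lookup and validation might work, assuming the response format shown above; the allowed values are inferred from the codings on this page, and `extract_coding` is an illustrative name, not part of any published API.

```python
import json

# Raw batch response; in the viewer this would be the full array shown
# above. A one-entry literal keeps the example self-contained.
raw = '''[
  {"id": "ytc_Ugzyw7P6UIG7qr9orm94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Allowed values per dimension, inferred from the samples on this page;
# the actual codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Return the validated coding for one comment from a batch response."""
    for entry in json.loads(raw_response):
        if entry.get("id") != comment_id:
            continue
        # Reject any value outside the closed set before displaying it.
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {entry.get(dim)!r}")
        return {dim: entry[dim] for dim in ALLOWED}
    raise ValueError(f"{comment_id} not found in response")

print(extract_coding(raw, "ytc_Ugzyw7P6UIG7qr9orm94AaABAg"))
# {'responsibility': 'developer', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'fear'}
```

Validating against a closed set matters here because an LLM can drift from the requested schema mid-batch; failing loudly on an unexpected label is safer than silently rendering it in the Coding Result table.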