Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "You realize thats not themain problem here right? Its the fact that they are usi…" (ytr_UgzZloaNd…)
- "There r better pics than what u printed out. And yeah - itll democratize art for…" (ytc_Ugx4K6U6i…)
- "No I would never vote for a robot but I fear there are a lot of unhinged individ…" (ytc_UgwjYXhBa…)
- "What version of AI is this? I thought AI can’t just follow rules you make. At le…" (ytc_UgySpWRqQ…)
- "The fact that they would’ve not gotten caught if they just generated their own A…" (ytc_UgwK4s1jA…)
- "She's an elitist hack. All these people are. None of these writers can use AI al…" (ytr_Ugwya4qBz…)
- "Ai insisted beccines were safe & spouted the government rhetoric based on Big Ph…" (ytc_UgzjOfBpf…)
- "It looks like there's a bit of confusion there! "Sophia" indeed means wisdom in …" (ytr_UgyP4qNrE…)
Comment
For the jobs of the future, what about the safety of the human species? If it will not be transparent about it 100% or to the degree that we would morally and ethically be connected to it as humans, would that be a viable job position that would be expected to have value? Essentially predicting that the small sector of human jobs would go towards looking out for the safety and viability of integration of humans and our new found AI life in all aspects of our society. But with the theoretical possibility we make it there, it was our doing in the first place. Leading to the question, are we even capable of those positions or is it a select few that would have the viable ability to truly speak for the human species, or is everything just a chaotic mess and this was bound to happen with the pursuit of convenience and happiness as a symptom of not needing to fight for our survival on a basic level against our environment and look for new ways to “better” our meaning or experience in life?
Source: youtube | Topic: AI Governance | Posted: 2025-09-05T13:4… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzmAUl5XznZzdtvLQd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwtnZU5mk6jbGGgEmJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxTa26iKExnaAsDMeR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_ec-ogCndToDJ4NN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwyZ9aMGusdPS8iSb94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgxF-ztKfsy5XxDmSP54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxA9G8bz5IYFo4ALlF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwwKVVAOO6tuBmV41R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwyqu9vjUTmqtlQiX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgztC9J4odrmHDGUaKR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
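The raw response above is a JSON array in which each element pairs a comment ID with one value per coding dimension. A minimal sketch of how such a payload could be parsed and checked before the values are stored is shown below; the allowed vocabularies are inferred solely from the sample output on this page, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may include more categories).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "indifference", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coding against the schema.

    Raises ValueError on malformed JSON, a missing id, or an
    out-of-vocabulary value, so drifting model output is caught early.
    """
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"missing id in {row!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-element payload in the same shape as the response above.
sample = ('[{"id":"ytc_x","responsibility":"developer",'
          '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(validate_codings(sample))
```

Rejecting out-of-vocabulary values rather than coercing them keeps the stored codings consistent with the dimension table shown under "Coding Result".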