Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI's social media fix:
1. Ensure algorithm data is diverse, representative and …
ytc_UgxMDCQ_x…
Stop creating robots and know that AII in some aspects is needed in certain are…
ytc_UgwxT46nq…
As soon as i found out about deep fake a few years ago i KNEW it would be used f…
ytc_Ugw7nolWj…
If the only criterion , by which we allow ai in the work field is profit and dol…
ytc_UgynqU1AF…
the only way to end the poaching is to end the demand.
this seems like an educa…
rdc_deubk53
A possible fix for ai, is make sure all AI's think in Englisch or other known la…
ytc_UgxAOgc_p…
This is literally just fear mongering. Reading the title of a news article or re…
ytc_Ugzs3aKCB…
« I like to read a lot about neuroscience » I 100% believe you, but then why do …
rdc_mdmybdw
Comment
I am not an educated person, I have suffered with mental health for most of my life and I only work part time whilst claiming welfare benefits. There is mental health pandemic that is only getting worse. It is estimated that a billion people globally suffer with a mental health condition.
If many jobs will be made redundant by 2030, doesn't it make sense to introduce UBI on the condition that many of the people that will become jobless can invest in studying to become mental health counsellors or psychologists (other fields as well where AI won't effect specific industries but primarily healthcare) in order to address the mental health pandemic?
Because I thoroughly believe AI won't be sufficient enough in providing this kind of care for an individual and requires a human to do so in terms of emotion, empathy and pragmatism. 🤷
My opinion is based on the present and the possibility of the next 5-10 yrs regarding AI. I have no opinion on conflicts, war, politics or conspiracy theories as I don't know enough on these subjects.
youtube
AI Jobs
2026-01-22T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzzkyFXYdj95mjxm1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxRWA1qiGku-_DyN1t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzZW0BWf3pywtcAUTx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9y6cCzKSHFfy1A0B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwjRv6xF5xiH8YRnYV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw1Ni9UhXKNoldYQIh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGYnxlN24NfCVFo9d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwJK140Ri_VrAAcZIN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxSvUWxzMEZBW1WNgx4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxuUHaHQYrM5Y2HPRt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
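The raw response is a JSON array of coded records, one per comment, keyed by comment ID with one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming the model output is exactly that JSON array (abbreviated to two records here; the function and variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment.
# Abbreviated to two records from the array above for illustration.
raw_response = """
[
  {"id": "ytc_Ugz9y6cCzKSHFfy1A0B4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxRWA1qiGku-_DyN1t4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and index each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
# Fetch the coding for one comment ID, e.g. the one shown in the
# "Coding Result" table above.
print(codes["ytc_Ugz9y6cCzKSHFfy1A0B4AaABAg"]["emotion"])  # fear
```

Indexing by ID turns the per-batch array into constant-time lookups, which is what the "Look up by comment ID" view needs when matching a coded record back to its source comment.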