Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Childcare is very important to have human beings work in bc one thing robots don’t understand us empathy, and they can’t help make them happier and help them with what they need. Also, I think it would be terrible for doctors and nurses too bc a huge part of that job is bedside manner and understanding how the patient feels in order to make them feel better and to figure out if certain things are phycological problems or physical problems. Also often there’s multiple options to what something could be, and u would need humans to think of that bc there isn’t just one answer to everything. Also I think it would be terrible to have lawyers as AI bc again it can’t understand empathy so it can’t fully understand reasons for why humans do certain things and sympathise with them and convince other people to sympathise with them too

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Jobs |
| Posted | 2025-09-13T21:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyHNZbk7pmcYMJ3dHt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz8pUb6bIkCCz2fxFN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFkr5hFG7XWFwyp7p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzkuTsYbpxl9_4nYN54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzyfYwQMQNXaUhcqlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwOMqaA3q5LcXWtrzd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz1Cv259JbPbfce3Ip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz6mChHtosPY6RA1AN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxsXGaW7mVWpjw5rd94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugza5xAxt6j6XIsHdxp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
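A raw response like the one above can be parsed into per-comment codes and checked against the coding scheme before the values are stored. The sketch below is a minimal example, not the tool's actual implementation; the allowed values in `SCHEMA` are inferred from the four dimensions and the values visible in this sample, and the real codebook likely includes further categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the full codebook may define more categories).
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "contractualist", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting any out-of-schema value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        # Keep only the schema dimensions, dropping anything extra.
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

With the response above, `parse_raw_response(raw)["ytc_UgxsXGaW7mVWpjw5rd94AaABAg"]` would return the codes shown in the Coding Result table (`responsibility: none`, `reasoning: virtue`, `policy: none`, `emotion: approval`).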