Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I would love to have a fully self-driving RV. It would be great to go to sleep i…" (ytc_UgyLaVrWn…)
- "Ai will not make more jobs then then it takes. Any other job will be so low pay…" (ytc_Ugz1hOeI-…)
- "Ai is already taking millions of jobs, speed running homelessness, keeping peopl…" (ytc_Ugwv-1T_2…)
- "Fun fact: if a normal driver were taking that turn, i think the human wouldve fl…" (ytc_Ugyuw2sVz…)
- "a lot of people think that if AI replaces everyones job, we can go to a post-sca…" (ytr_Ugzi-059k…)
- "Sometimes I need to catch myself. I can just code that, don't ask chatgpt, it'll…" (rdc_jigx6pp)
- "The only way we replace every single job with AI is if it's sentient; and that i…" (rdc_nclcpqp)
- "Terminator is real, the biggest AI surveillance company in China is actually cal…" (ytc_UgzYki8ll…)
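The lookup-by-comment-ID feature above amounts to an index from comment ID to record. A minimal sketch, assuming a simple in-memory list of records (the IDs and text below are hypothetical placeholders, since the IDs shown in this view are truncated):

```python
# Build an ID -> record index for exact-ID lookup.
# NOTE: these records are illustrative placeholders, not the
# truncated samples shown above.
samples = [
    {"id": "ytc_example1", "text": "I would love to have a fully self-driving RV."},
    {"id": "rdc_example2", "text": "Sometimes I need to catch myself."},
]

index = {s["id"]: s for s in samples}

def lookup(comment_id: str):
    """Return the record for an exact comment ID, or None if absent."""
    return index.get(comment_id)

print(lookup("rdc_example2")["text"])  # Sometimes I need to catch myself.
```

Exact-match lookup like this complements the random-sample view: one is for spot checks, the other for chasing a specific comment.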
Comment
> In complex things AI is worst to even understand what you want i am using AI FROM CHATGPT launch time in coding and other task but they are not able to do any new original task they only able to generate what they feed from people work and made lot of mistakes so you correct them sometimes overly generate things you lost track and wast lost of Time.
>
> Agentic AI if it will get success in future it will work on one system but it you want to connect multiple systems in real world with complex problems it will messed up .
>
> But one thing is real what we already feed it it is good to instantly summarise these topics from different sources this will help in education and all over other industries and reduce time and making people more efficient
>
> but it will never gain people like logical thinking even in agentic AI they are going to terrible in future i listen mostly all the ceo from Google to open ai to anthropic they doesn't have very convincing answer.
>
> But we have to positive like they are able to already existed work 20-25 % efficient
youtube · AI Jobs · 2026-02-23T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyUpvOssUCVEtO-tfl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxV9lxZp--5LkCNpAF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgynwwbhlYy2hHg1wsB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzyOgNsClBeE9AvXS54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxucrlZSs9-Bxo2vW94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwAdVC8ORQ9TMhoQcR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHTk3hEPoB5FAfHtF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwVG267bc3D8xIcDOp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy0mvsMLGhDv1Tp0wJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzIbROfZuFM4lI2JMB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
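The raw response is a JSON array with one record per comment ID, each coded on four dimensions. A minimal sketch of parsing and validating such output, assuming the allowed values are those visible in this dump (the real codebook may contain more):

```python
import json

# Allowed values per dimension, inferred from the codes visible above.
# ASSUMPTION: this is not the full codebook, only what this dump shows.
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "outrage", "mixed", "resignation", "approval", "fear"},
}

def parse_raw_response(raw: str) -> list:
    """Parse one raw LLM response (a JSON array of coded comments),
    rejecting any record whose value falls outside the codebook."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgyUpvOssUCVEtO-tfl4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
coded = parse_raw_response(raw)
print(coded[0]["emotion"])  # indifference
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the coding scheme, before bad codes reach the results table.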