Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Ai taking over is going to solve the major shortages in health care more people …" (ytc_UgxzHwHWr…)
- "Thank you for your observation! In the video, Sophia discusses striving to embod…" (ytr_Ugy8WDA2s…)
- "I saw it I asked Gemini about it but it either failed or sent a random link to a…" (ytr_UgzM72eUp…)
- "BTW I use it to vent. But remember. It's not your friend or tharapist! Seriously…" (ytc_UgwE1LRPw…)
- "So why does my ChatGPT not allow me to generate images? Is it an older version o…" (ytc_Ugx-_w_Ot…)
- "I got a feeling AI will be restricted if not outright banned if it takes jobs aw…" (ytc_UgxIT86gu…)
- "Gary asked good question about consumption of the power by AI. I would like to h…" (ytc_UgwBz62BY…)
- "I was at NYCC and I came across his table. I INSTANTLY spotted the AI.…" (ytc_Ugy5a0WqP…)
Comment
I think the idea of stifling progress is and should be anathema to us, as humans - it leads to an almost Warhammer 40.000 kind of backward reliance on traditionalism. To summarize my thoughts - I think the gains of progress outweigh the inevitable down-sides of this double edged sword. Like people complaining about how social-media like Facebook make people distant, by stopping AI progress we would be focusing on preventing this "down side" of a future technology and denying all the merits (and steps forward); instead we should be focusing on dealing with said issues once they arrive and reaping the benefits of the positive side (robot work force, for example). Let's not forget that humanity has had the technology for supersonic passenger flight (Concord) - and see how that turned out.
What is another interesting question is whether it would be ethical to create a race (and if they have sentience, they would be a race) of begins (AI) whose sole purpose is to serve us and take all the labor of our backs. But that is a whooooole different story : )
Source: youtube · Posted: 2013-12-05T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugj8AXUuhgfjUXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghEfYIiBlCtyHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg6pJ8sg8sIuXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh4Izu1dFDCBngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggStT0fkttiU3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjHME_FVR-RjHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgiFPP6fP-f4CXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugjk-OLPfqT00HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugj_Nwoh-nEukngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UggdtWoUYVl_S3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
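A raw response like the one above can be parsed and indexed by comment ID so that any coded comment's dimensions can be looked up directly. The sketch below is a minimal, hypothetical illustration (the function and variable names are not part of the tool): it loads the JSON array, drops any entry missing one of the four coded dimensions, and looks up one coding by its ID. The two entries shown are taken from the response above.

```python
import json

# Two entries copied from the raw LLM response above, as a JSON array string.
raw_response = """
[
{"id":"ytc_Ugj8AXUuhgfjUXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh4Izu1dFDCBngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

# The coded dimensions shown in the table above, plus the comment ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model output and index codings by comment ID,
    skipping any entry that lacks one of the expected keys."""
    codings = {}
    for entry in json.loads(raw):
        if EXPECTED_KEYS.issubset(entry):
            codings[entry["id"]] = entry
    return codings

codings = index_codings(raw_response)
print(codings["ytc_Ugh4Izu1dFDCBngCoAEC"]["emotion"])  # -> fear
```

Skipping malformed entries rather than failing keeps the lookup usable even when the model occasionally emits an incomplete object.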