Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
1. Weak AI ( our current technology)
2. Strong AI- self aware AI, human level intelligence
3. Super AI- very scary, genius level- human level intelligence, with the ability to make itself even smarter, analyze, process, interpret and think hundreds if not thousands of times faster than any human ever could. Think of Terminator meets iRobot on steroids.
I do believe at some point this century will reach strong AI. Siri and that robot that beat that world-class chess player are excellent examples of weak AI. Trust me those computers will become smarter over the next couple decades. Ray Kurzweil the technologist, is estimating year 2045. I believe 2045 at the earliest. A lot of people from the technology community did a survey and there's a 80 percent probability that it will occur within the next hundred years.
youtube
2015-07-30T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UggHC7rLa4Gu0XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggqZ-Bfm6zNFXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgiRHZUgZugRGHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UghIxxuueQpi6ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghHmu3UOIYh5ngCoAEC","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjviAibkxEovXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugg7AFhd6A9w3ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugix_u8m5HqkxXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugi2IphvaEHTxHgCoAEC","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjXp-Uti9IrE3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
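A raw response like the one above can be turned into a lookup table keyed by comment ID. The sketch below is a minimal illustration, not the tool's actual implementation: it assumes the JSON shape shown above, and the per-dimension vocabularies are inferred only from the values visible in this response (the real codebook may allow more).

```python
import json

# A small excerpt in the same shape as the raw response above (illustrative).
RAW_RESPONSE = """[
{"id":"ytc_UghIxxuueQpi6ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggHC7rLa4Gu0XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

# Allowed values per dimension, inferred from this response only (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"approval", "indifference", "mixed", "fear", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    skipping any row with a missing or out-of-vocabulary value."""
    out = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return out

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UghIxxuueQpi6ngCoAEC"]["emotion"])  # fear
```

Validating against a fixed vocabulary is worth the extra lines: LLM coders occasionally emit values outside the codebook, and dropping (or flagging) those rows keeps downstream tallies clean.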