The technology industry has spent three years warning the world about AI hallucination, the phenomenon where artificial intelligence fabricates facts, invents citations, and generates confident nonsense. That warning is valid, and AI hallucination is real, documented, and dangerous when undetected.
But it is not the most dangerous data problem in public discourse right now.
The most dangerous data problem in public discourse is human hallucination, a term introduced here to describe the act of taking data, stripping it of methodology, removing its context, ignoring its instrument design, and sharing it as settled fact to audiences who have no training to evaluate what they are reading. Human hallucination does not require a language model. It requires a share button, an opinion, and zero accountability for the downstream consequences of distributing unqualified information to millions of people who will absorb it, react to it, and redistribute it without ever asking whether the data said what the poster claimed it said.
AI hallucination produces a wrong answer that the user can verify, while human hallucination produces a wrong interpretation that the audience has no mechanism to catch.
The Case Study: An Ipsos Survey Goes Viral
In March 2026, the Ipsos polling firm and the Global Institute for Women’s Leadership at King’s College London published survey results ahead of International Women’s Day. The headline finding: 31% of Gen Z men agree that “a wife should always obey her husband,” a rate more than double that of Baby Boomer men at 13%.
The study surveyed 23,268 adults across 29 countries between December 24, 2025, and January 9, 2026, using the Ipsos Global Advisor online platform. Sample sizes ranged from approximately 2,000 in Japan to 500 in smaller markets. The credibility interval is +/- 3.5 percentage points for samples of 1,000 and +/- 5.0 percentage points for samples of 500.
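The arithmetic behind those intervals is worth making visible, because almost no one who shared the headline could reproduce it. Ipsos reports Bayesian credibility intervals, which are computed differently from the classical margin of error, but a normal-approximation sketch lands in the same range and shows the scale of uncertainty a 500-person country sample carries:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Classical 95% margin of error for a proportion (normal approximation).
    A rough stand-in for Ipsos's Bayesian credibility intervals, which are
    derived differently but land in a similar range."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 500):
    # Worst-case uncertainty occurs at p = 0.5.
    print(f"n = {n:4d}: +/- {margin_of_error(0.5, n) * 100:.1f} points")
# n = 1000: +/- 3.1 points
# n =  500: +/- 4.4 points
```

Ipsos's published figures (+/- 3.5 and 5.0) are wider than this naive calculation, consistent with adjustments for online non-probability sampling. Either way, any single-country or subgroup estimate rests on a sample far smaller than the headline 23,268, with uncertainty to match.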
These are the facts, and what happened next on social media had nothing to do with facts.
The headline number, 31%, spread across LinkedIn, X, Facebook, and every platform where people share statistics they have not qualified. Professionals with tens of thousands of followers posted the data point as evidence of a generational shift backward on gender equality. Comment sections filled with moral outrage. Threads generated hundreds of reactions. The conversation became a performance of values rather than an evaluation of evidence.
In the observed viral activity, not one post addressed the instrument design.
What the Instrument Did Not Ask
The survey asked respondents whether they agreed or disagreed with the statement: “A wife should always obey her husband.” It did not ask whether a husband should always obey his wife. It did not ask what respondents meant by “obey.” It did not provide a reciprocal framing that would have tested whether respondents were expressing a belief about subordination or a belief about mutual respect within marriage, and it collected no context on how respondents understood the question.
The same survey did include a related male authority item asking whether “a husband should have the final word on important decisions made in his home,” and 33% of Gen Z men agreed with that statement as well. This adjacent question points in a similar direction, but it is not a reciprocal question testing the inverse (“should a wife have the final word?”), and both items share the same unidirectional agree/disagree format that makes them susceptible to the same methodological vulnerability.
The absence of reciprocal framing is not a minor omission; it is the entire difference between a finding and a provocation.
Consider the inverse reading that the viral posts universally ignored. If 31% of Gen Z men agreed that a wife should always obey her husband, then 69% did not agree. If 33% agreed the husband should have the final word, then 67% did not agree with that position. The supermajority of Gen Z men declined both traditional statements. The viral narrative selected the minority agreement and treated it as the story while ignoring the supermajority disagreement. That editorial choice is itself an act of human hallucination.
The word “obey” carries additional contextual ambiguity that the instrument made no effort to resolve. In any marriage where two people reach an impasse on a decision that cannot be deferred, someone has to make the call. If respondents interpreted “obey” as “defer to in unresolvable conflict” rather than “submit to in all circumstances,” the same data point describes a conflict resolution preference held by a minority, not a subordination mandate, and the 69% who declined to agree become the story the viral posts never told. The survey did not distinguish between these interpretations, and neither did anyone sharing it.
There is also a body of research that the viral interpretation entirely ignored: women’s own stated preferences for traditional arrangements. Gallup polling has tracked this question since 1992, and even at the record high for women preferring to work outside the home (56% in 2019), 39% of American women still preferred the homemaker role. In that same 2019 Gallup poll, half of women with children under 18 preferred the homemaker role, and in earlier survey years (2015) the figure reached 56% for that subgroup. A September 2025 19th News/SurveyMonkey poll of 20,807 U.S. adults found that 40% of women agreed society would benefit from a return to traditional gender roles, and 58% of women agreed that families are better off when one parent stays home. The 2025 IFS/Wheatley Institute Women’s Well-Being Survey of 3,000 women conducted by YouGov found that married mothers reported significantly higher happiness levels than single childless women, married childless women, and unmarried mothers, after controlling for age, income, and education. These are not fringe findings from obscure sources. They are published data from Gallup, YouGov, and SurveyMonkey showing that traditional family preferences are held voluntarily by substantial percentages of women across the population.
The viral interpretation of the Ipsos survey treated agreement with a traditional statement as inherently threatening. These findings on women’s own preferences do not vindicate the specific “obey” response, but they do show that traditional family preferences exist across gender lines, which complicates the narrative that agreement with any traditional statement is automatically a signal of subordination. The research base on women’s own preferences shows the picture is more complex than the headline allowed, and the instrument made no effort to capture that complexity.
The survey was conducted to mark International Women’s Day, and the framing, question selection, and publication timing were organized around a specific advocacy calendar event. This does not mean the data is fabricated. It means the instrument produced a directional result, and the absence of reciprocal framing ensured that the result would move in one direction only. The signal may be real, but the social interpretation traveled farther than the instrument can responsibly carry.
Acquiescence Bias: The Known Methodological Vulnerability
The instrument’s directional design is not only a framing problem; it intersects with a known measurement vulnerability that compounds the inflation risk.
Survey research has a well-documented vulnerability called acquiescence bias, also known as agreement bias: the tendency of respondents to agree with a statement presented to them regardless of its content. Research consistently shows that agree/disagree question formats inflate affirmative responses, particularly among respondents with low prior information on the topic or low motivation to engage critically with the question.
YouGov’s own survey experiments showed that poorly designed instruments can produce measurements of public opinion that are, in their words, “inaccurate at best and completely misleading at worst.” The experiments found that agree/disagree scales falsely inflate support for a given position because respondents have a greater natural tendency to agree with a suggestion than to disagree with it.
Hill and Roberts (2023), published in Political Analysis by Cambridge University Press, found that acquiescence bias can inflate estimated prevalence of certain beliefs by up to 50 percentage points. The effect was most pronounced among respondents at ideological extremes and notable among younger respondents. A 2025 corrigendum corrected a coding error affecting demographic correlations in the original study, but the core findings on acquiescence inflation survived the correction intact. The researchers explicitly advised that instruments using agree/disagree formats without counterbalanced question wording produce systematically distorted results.
Acquiescence bias is not the only measurement vulnerability operating on this survey. Social desirability bias, the tendency for respondents to give the answer they believe is socially acceptable rather than the answer they actually hold, operates in the opposite direction and interacts with the generational comparison in a way the headline finding does not acknowledge. Baby Boomers and Gen X came of age in environments where certain views on gender carried professional and social consequences for expressing them openly. Decades of conditioning taught those cohorts which answers are safe and which are not. Gen Z came of age online, in environments where unfiltered expression is the default and where social media rewards candor, provocation, and authenticity over diplomatic filtering. If social desirability bias suppressed Boomer agreement (pulling the 13% lower than actual belief) while acquiescence bias inflated Gen Z agreement (pulling the 31% higher than actual belief), then the headline finding that Gen Z men are “twice as likely” to hold traditional views may not be measuring an attitude shift at all. It may be measuring the difference between a generation that filters and a generation that does not, on a question format already known to inflate agreement. The instrument cannot distinguish between “I believe this” and “I am willing to say this out loud,” and that distinction is the entire basis of the generational comparison the viral posts treated as settled fact.
The Ipsos survey used exactly this format. The statement “a wife should always obey her husband” was presented for agreement or disagreement. The standard methodological remedy, counterbalancing the instrument with an equal number of positively and negatively framed items to force deliberate engagement, was either absent from the instrument or not reported in the published methodology. The full questionnaire has not been located in publicly available Ipsos materials, and the cross-tabulation tables are available only upon request.
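The interaction of these two biases is easy to demonstrate. The following toy simulation uses invented bias parameters, chosen for illustration rather than estimated from the Ipsos data, to show how two cohorts with identical underlying beliefs can produce a headline-sized generational gap under a unidirectional agree/disagree format:

```python
import random

random.seed(42)

def simulate_cohort(n, true_agree_rate, acquiescence_rate, suppression_rate):
    """Toy response model with invented parameters (not estimates from the
    Ipsos data). Each respondent holds a true belief; acquiescence flips
    some non-agreers to 'agree', social desirability flips some agreers
    to 'disagree'."""
    agree = 0
    for _ in range(n):
        believes = random.random() < true_agree_rate
        if believes:
            # Social desirability: some true agreers hide their view.
            answered_agree = random.random() >= suppression_rate
        else:
            # Acquiescence: some non-agreers agree with the prompt anyway.
            answered_agree = random.random() < acquiescence_rate
        agree += answered_agree
    return agree / n

# Same hypothetical true belief rate (20%) in both cohorts.
gen_z   = simulate_cohort(100_000, 0.20, acquiescence_rate=0.15, suppression_rate=0.05)
boomers = simulate_cohort(100_000, 0.20, acquiescence_rate=0.05, suppression_rate=0.40)

print(f"Gen Z measured agreement:  {gen_z:.0%}")   # ~31%
print(f"Boomer measured agreement: {boomers:.0%}") # ~16%
```

With the same hypothetical 20% true agreement in both cohorts, the invented parameters yield measured agreement of roughly 31% for the unfiltered cohort and 16% for the filtered one, a gap of the same shape as the headline finding. This does not prove the Ipsos gap is an artifact. It proves the instrument, as designed, cannot rule that out.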
Every professional who shared the 31% figure without noting this vulnerability distributed a data point with a known methodological weakness to audiences who had no way to identify that weakness.
The Viral Amplification Problem
A methodological weakness contained within qualified circles is manageable. A methodological weakness distributed to millions through viral sharing is not, and the platform architecture ensured the Ipsos data did not stay contained.
The 2018 Vosoughi, Roy, and Aral study published in Science found that falsehoods on Twitter diffused significantly farther, faster, and deeper than the truth, and that true stories took approximately six times longer than false ones to reach 1,500 people. When a provocative data point goes viral, its correction never reaches the same audience. The outrage is louder, stickier, and more interesting than the methodological caveat that should have accompanied it.
On LinkedIn specifically, the dynamic is worse. The platform is positioned as a professional network where data sharing carries implicit credibility. When a senior professional posts a statistic from a recognized polling firm, the audience receives it as vetted information. The share carries the authority of the poster’s title, their follower count, and their professional reputation. The methodology note that should accompany the statistic is invisible because the poster never read it, and the audience has no reason to go looking for it.
The result is a cascade of professional authority endorsing unqualified data. Every share, every reaction, every comment that treats the headline number as settled fact adds another layer of social proof to a finding that cannot support the interpretation being placed on it. The algorithm rewards engagement, the engagement rewards outrage, the outrage rewards simplification, and simplification is where data goes to die.
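A toy diffusion model makes the compounding visible. The reshare rates below are invented for illustration; what matters is that any persistent per-round advantage compounds exponentially:

```python
def reach_after(rounds: int, seed_audience: int, branching: float) -> int:
    """Toy cascade: each round, the newest wave of viewers generates
    `branching` new viewers per existing viewer on average. Ignores
    audience overlap, so absolute numbers are inflated; the comparison
    between the two posts is the point."""
    reach = wave = seed_audience
    for _ in range(rounds):
        wave = round(wave * branching)
        reach += wave
    return reach

# Invented rates: the outrage framing earns slightly more than one new
# viewer per viewer; the qualified framing slightly less than one.
outrage   = reach_after(rounds=20, seed_audience=1_000, branching=1.2)
qualified = reach_after(rounds=20, seed_audience=1_000, branching=0.7)

print(f"Outrage framing:   {outrage:,} viewers")    # roughly 225,000
print(f"Qualified framing: {qualified:,} viewers")  # roughly 3,300
```

A branching factor of 1.2 versus 0.7 is not a large per-round difference, but after twenty rounds it is the difference between a post that saturates a professional network and one that dies in the feed.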
This is the echo chamber mechanism operating on data instead of politics. Geoffrey Hinton cited echo chambers and algorithmic polarization among the risks he warned about when he resigned from Google in 2023 to speak freely about AI safety. Governing AI: When Capability Exceeds Control (Puglisi, 2025) organized his publicly stated warnings into seven threat domains, with Chapter 3 examining echo chambers and polarization in detail: engagement-optimized algorithms amplify outrage for advertising revenue, making polarization profitable and therefore persistent.
Facebook’s own internal documentation, as reported in the 2021 Wall Street Journal “Facebook Files” series and the subsequent Frances Haugen SEC disclosures, confirmed that recommendation systems funnel users toward extremes because sustained emotional engagement generates ad revenue. The conclusion from that analysis holds here: polarization operates not merely as a social problem but as a governance incapacity indicator. Institutions failing to govern echo chambers at social media scale show patterns that predict failure at every other scale, including AI governance. The Ipsos viral cycle is a live demonstration. The unqualified statistic generated outrage, the outrage generated engagement, the engagement triggered algorithmic amplification, and the amplification generated more outrage. The same loop that radicalizes political discourse is now radicalizing data interpretation, and it runs on identical platform infrastructure.
Research on misinformation sharing suggests the problem is not limited to obviously fabricated content. It includes real data from real sources shared without the context required to interpret it correctly. The Ipsos survey is legitimate research from a legitimate firm, and the data point is real. The methodology is defensible within its stated parameters. What is not defensible is stripping that data from its methodological context and distributing it to millions of people as evidence of a cultural conclusion that the instrument was not designed to support.
Human Drift Is Worse Than AI Drift
In AI systems, drift describes the degradation of output quality over time as a model’s responses shift from accurate to unreliable. Hallucination describes the generation of confident fabrication. Both are documented, studied, and increasingly detectable through cross-platform validation and human checkpoint governance.
Human drift operates on the same principles but without the detection mechanisms.
When a person encounters a statistic that confirms their existing beliefs, they share it. The act of sharing is not preceded by methodology review, instrument evaluation, or source qualification. The statistic enters the person’s worldview as fact. Over time, as more unqualified data accumulates, the person’s analytical framework drifts from evidence-based reasoning toward confirmation-based collection. Every statistic that passes through without qualification reinforces the habit of accepting data at face value, and every share that generates social approval reinforces the behavior of distributing unqualified data to others.
This is human drift, and it is slower than AI drift, harder to detect, and more dangerous because the human doing it believes they are being data driven.
Human hallucination follows the same pattern. A person reads a headline statistic, constructs a narrative around it that the data does not support, and presents that narrative as the data’s conclusion. The Ipsos survey found that 31% of Gen Z men agreed with a specific statement. The human hallucination is that this finding proves Gen Z men want to subordinate women. The data does not say that. The data says that 31% of Gen Z men selected “agree” on a unidirectional statement in an online poll conducted over sixteen days during the holiday season, while 69% declined to agree. Everything beyond that is interpretation, and interpretation without instrument evaluation is hallucination by another name.
The same professionals who warn their audiences about AI hallucination are committing the human equivalent every time they share a data point they have not qualified, and the irony should not be lost on anyone watching.
The “Data Driven” Illusion
The most dangerous phrase in modern professional culture is “data driven.” Not because being data driven is wrong, but because the phrase has been adopted as an identity marker rather than a practice standard.
Being data driven requires qualification discipline. It requires asking where the data came from, how it was collected, what the instrument looked like, what questions were and were not asked, what the sample represented, what the credibility interval means in practical terms, and whether the finding supports the interpretation being placed on it. That discipline takes training, it takes time, and it takes the willingness to slow down before sharing.
Most people who describe themselves as data driven do none of this. They consume data but do not qualify it. They share statistics because the statistics support positions they already hold. The data becomes a rhetorical weapon rather than an analytical input. The share button replaces the methodology review, and the engagement metric replaces the credibility assessment.
The professionals who shared the Ipsos headline on LinkedIn include people with PhDs, MBAs, MPAs, JDs, and every other credential that supposedly certifies analytical competence. The doctorate after a name is supposed to mean the holder can evaluate methodology. When those same credential holders share a headline statistic without qualification, they are not failing because they lack training. They are failing to apply the training their credentials represent.
And the failure does not stop at the consumers of the data. King’s College London, the academic partner on this survey, is a research institution with doctoral programs and faculty who hold the very credentials that are supposed to enforce methodological rigor. The Global Institute for Women’s Leadership at King’s Business School published headline findings from an instrument with a known methodological vulnerability, a unidirectional agree/disagree format with no reciprocal framing, timed to an advocacy event, without prominently disclosing those limitations to the public audience that would consume the headlines. The institution whose name is supposed to certify research rigor participated in the same qualification failure that every LinkedIn poster repeated downstream. If the credentialing institution itself does not enforce the standard, the credential carries no operational weight, and every PhD who shared the result without qualification proved it.
The Factics Standard: Every Fact Must Lead to a Tactic, and Every Tactic Must Leave Evidence
The Factics methodology, developed over more than a decade of consulting practice and published in the Digital Factics series beginning in November 2012, provides the structural discipline that viral data sharing lacks.
Factics operates on a three-part formula: Facts + Tactics + KPIs. Every significant claim requires a fact grounded in verifiable evidence. Every fact requires a tactic, an executable action rather than a suggestion. Every tactic requires a KPI that converts the intended outcome into something testable. The loop forces clarity at every step.
Applied to the Ipsos case, the Factics standard exposes the failure chain:
Fact: 31% of Gen Z men in a 29-country online survey agreed with the statement “a wife should always obey her husband” during a poll conducted December 24, 2025 through January 9, 2026, with a credibility interval of +/- 3.5 to 5.0 percentage points depending on country sample size. The same survey found 69% of Gen Z men did not agree with the statement.
Missing qualification: The survey used a unidirectional agree/disagree format susceptible to acquiescence bias. No reciprocal question was asked. The instrument was published in connection with an advocacy calendar event. The full questionnaire has not been located in public materials.
Tactic (what should have happened): Before sharing, evaluate the instrument design, note the absence of reciprocal framing, qualify the finding with the credibility interval and methodology limitations, and present the data as a preliminary signal requiring contextual verification rather than a settled cultural conclusion.
KPI: Audience engagement with qualified analysis rather than unqualified outrage. Measurable indicator: ratio of shares that include methodology context to shares that present headline numbers without qualification.
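That KPI can be made operational with a small amount of code. The qualification markers and post structure below are illustrative assumptions, not a published Factics specification; a production version would need human review rather than keyword matching:

```python
from dataclasses import dataclass

# Hypothetical markers of methodology context; a real classifier would
# need human review, not keyword matching.
QUALIFICATION_MARKERS = (
    "credibility interval", "margin of error", "sample size",
    "acquiescence", "methodology", "agree/disagree format",
)

@dataclass
class Post:
    text: str
    shares: int

def is_qualified(post: Post) -> bool:
    """Crude heuristic: does the post carry any methodology context?"""
    lowered = post.text.lower()
    return any(marker in lowered for marker in QUALIFICATION_MARKERS)

def qualification_ratio(posts: list[Post]) -> float:
    """Factics KPI sketch: share-weighted ratio of qualified posts."""
    qualified = sum(p.shares for p in posts if is_qualified(p))
    total = sum(p.shares for p in posts)
    return qualified / total if total else 0.0

posts = [
    Post("31% of Gen Z men think wives should obey husbands!", shares=4_200),
    Post("Ipsos found 31% agreement, but note the agree/disagree format "
         "and +/- 3.5 pt credibility interval before drawing conclusions.",
         shares=35),
]
print(f"Qualification ratio: {qualification_ratio(posts):.1%}")  # ~0.8%
```

Given that no observed post in this cycle carried methodology context, the real-world ratio was effectively zero.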
In the observed viral activity around this survey, no post met the Factics standard.
And here is the uncomfortable truth about why: the platforms do not reward it. Social media algorithms optimize for engagement, and engagement optimizes for emotional reaction. A qualified data point with methodology caveats generates less outrage, fewer shares, lower reach, and reduced algorithmic amplification. A headline statistic stripped of context generates moral performance, tribal signaling, and viral distribution. The platform rewards the unqualified share and buries the qualified one.
This means the Factics standard is not a best practice that people forgot to follow. It is a governance intervention against the economic architecture of every major social platform. The platforms are built to punish exactly the behavior that data literacy requires: slowing down, qualifying claims, noting limitations, and presenting findings as signals rather than conclusions.
The Connection to AI Governance
This is not a detour from AI governance; it is the foundation of it.
The three-tier framework published across the HAIA ecosystem distinguishes Ethical AI (should this be done?), Responsible AI (who answers when this fails?), and AI Governance (who decides, by what authority, at what checkpoint?). The CBG Ethics Audit 2020-2025 tested whether transparent, measurable frameworks can overcome the bias built into both political discourse and AI systems, and the finding was clear: when analysis is grounded in verifiable checkpoints instead of moral framing, objectivity becomes possible even in environments saturated with ideological and algorithmic distortion.
Checkpoint-Based Governance, the constitutional authority framework published in Governing AI: When Capability Exceeds Control (Puglisi, 2025), establishes that a named human with binding authority at defined checkpoints must evaluate AI outputs before those outputs become consequential. But checkpoint governance assumes the human at the checkpoint is qualified to evaluate what they are reviewing. If the human at the checkpoint cannot distinguish between a qualified finding and an unqualified headline statistic, the checkpoint produces nothing, and the human rubber stamps the output because they lack the analytical training to do anything else.
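What a checkpoint gate might look like in code is worth sketching, if only to show how little structure is required. The field names and blocking rule below are illustrative assumptions, not the published CBG specification:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    statistic: str                     # e.g. "31% of Gen Z men agreed"
    source: str                        # e.g. "Ipsos Global Advisor, 2026"
    instrument_notes: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)

@dataclass
class Checkpoint:
    """A named human with binding authority over distribution.
    Illustrative sketch only; not the published CBG specification."""
    reviewer: str

    def approve(self, claim: Claim) -> bool:
        # The gate is structural: an unqualified claim cannot pass,
        # regardless of how confident the would-be poster feels.
        if not claim.instrument_notes or not claim.limitations:
            print(f"[{self.reviewer}] BLOCKED: missing qualification")
            return False
        print(f"[{self.reviewer}] approved with {len(claim.limitations)} caveats")
        return True

gate = Checkpoint(reviewer="named.human@example.org")
gate.approve(Claim("31% of Gen Z men agreed", "Ipsos Global Advisor, 2026"))
# [named.human@example.org] BLOCKED: missing qualification
```

The gate does nothing clever. It simply refuses to let a claim pass without its qualifications attached, which is exactly the step every viral share of the Ipsos number skipped.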
This is the thesis that the follow-up to this paper will address directly: AI is only as good as the human using it. The evidence from this case study suggests the assumption buried inside that thesis, that the human is qualified to govern, does not hold. The same professionals who will be asked to serve as human governors in AI checkpoint systems are currently distributing unqualified survey data to mass audiences with no methodology review, no instrument evaluation, and no accountability for the downstream consequences. If they cannot govern their own information consumption, they cannot govern AI output.

The Real Crisis
The governance failure described above is not confined to AI systems. It is the same failure operating across the entire information chain, and the absence of detection mechanisms on the human side makes it worse, not better. Machines hallucinate in contexts where detection mechanisms exist and are improving. Humans hallucinate in contexts where no detection mechanism is applied, where the social incentive structure rewards the hallucination with engagement, and where the professional credibility of the person sharing the unqualified data transfers to the data itself.
The real data literacy crisis is not that AI generates false information. The real crisis is that humans take real information, strip it of everything that makes it interpretable, distribute it to audiences with no analytical training, and call it being data driven.
“AI as a Mirror to Humanity: Do What We Say, Not What We Do” (Puglisi, 2025) documented that AI systems reflect actual human values rather than stated ones. The mirror showed that humans claim to value fairness, accuracy, and evidence-based reasoning while the systems trained on human output reveal bias, suppression, and ideological filtering embedded at the structural level. The same mirror applies to data sharing. Professionals state they value rigorous analysis, but their actual behavior is sharing unqualified statistics for social approval.
The bias runs deeper than individual behavior because it is embedded in the institutions that certify knowledge itself. Henrich, Heine, and Norenzayan (2010) documented that 96% of subjects in top psychology journals came from Western, Educated, Industrialized, Rich, and Democratic populations representing just 12% of the world’s population. The peer review process that certified all of that research as rigorous never caught the population bias because the reviewers shared it.
The three-tier framework makes the final distinction clear. Ethical AI asks whether something should be done. Responsible AI asks who answers when something fails. AI Governance asks who decides, by what authority, at what checkpoint. The viral distribution of unqualified data operates entirely outside all three tiers. No governance checkpoint existed between data consumption and mass distribution. The entire chain from Ipsos survey to LinkedIn comment section operated in a governance vacuum, and the result is exactly what governance vacuums produce: confident action on unqualified information, at scale, with no accountability and no correction mechanism.
Until data literacy becomes a prerequisite for data sharing, not just an academic credential but an operational practice embedded in every information distribution chain, human drift and hallucination will remain the most consequential data problem in public discourse. The mirror is showing us something, and it is not the machines.
That is the crisis. Not that AI hallucinates, but that the institutions in the chain from research design to public discourse have not enforced the standards they claim to uphold, and no one built a checkpoint where the human could intervene before the damage was done.
Frequently Asked Questions
What is human hallucination in data sharing?
Human hallucination is the act of sharing real data stripped of methodology and context as though it were settled fact. Unlike AI hallucination, which fabricates outputs that detection tools can catch, human hallucination produces wrong interpretations that audiences cannot identify because the underlying data is real. No detection mechanism currently exists for this failure.
What is acquiescence bias and how does it affect survey results?
Acquiescence bias inflates survey agreement rates because respondents tend to say “yes” regardless of question content. Hill and Roberts (2023) found it can inflate reported beliefs by up to 50 percentage points. Surveys using unidirectional agree/disagree formats without counterbalanced wording produce systematically distorted results that overstate the prevalence of any presented position.
How does social desirability bias affect generational survey comparisons?
Social desirability bias suppresses honest answers from respondents conditioned to know which views carry consequences. Older generations filter more heavily than Gen Z, which grew up rewarding unfiltered expression online. The resulting generational gap may measure differences in candor rather than actual attitude shifts, and the survey instrument cannot distinguish between the two.
What is the Factics methodology for qualifying data before sharing?
Factics (Facts + Tactics + KPIs) requires every claim to carry verifiable evidence, every fact to pair with an executable action, and every action to produce a measurable outcome. Developed in November 2012 and published in the Digital Factics series, the methodology provides the structural qualification discipline that viral data sharing on social platforms systematically lacks.
How does Checkpoint-Based Governance apply to data literacy?
Checkpoint-Based Governance (CBG) requires a named human with binding authority at defined decision points to evaluate outputs before they become consequential. The viral data sharing chain has no checkpoint between data consumption and mass distribution, which is why unqualified statistics travel unchecked across professional networks with the authority of the poster’s credential attached.
What did the Ipsos IWD 2026 survey actually find about Gen Z men?
The Ipsos survey found 31% of Gen Z men agreed with a unidirectional agree/disagree statement about wives obeying husbands, while 69% did not agree. The viral narrative selected the minority agreement as the story, ignoring the supermajority disagreement and the instrument’s documented susceptibility to acquiescence bias and social desirability bias effects.
References
Atari, M., Xue, M. J., Park, P. S., Blasi, D. E., & Henrich, J. (2023). Which humans? (preprint). Department of Human Evolutionary Biology, Harvard University. PsyArXiv.
Gallup. (2019, October 24). Record-high 56% of U.S. women prefer working to homemaking. Gallup Poll Social Series, Work and Education. https://news.gallup.com/poll/267737/record-high-women-prefer-working-homemaking.aspx
Haugen, F. (2021). SEC disclosures and Congressional testimony on Facebook internal research. Referenced via Wall Street Journal “Facebook Files” series, September-October 2021.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.
Hill, S. J., & Roberts, M. E. (2023). Acquiescence bias inflates estimates of conspiratorial beliefs and political misperceptions. Political Analysis, 31(4), 575-590.
Hill, S. J., & Roberts, M. E. (2025). Acquiescence bias inflates estimates of conspiratorial beliefs and political misperceptions: Corrigendum. Political Analysis, 33(2), 178-180.
Holbrook, A. (2008). Acquiescence response bias. In P. J. Lavrakas (Ed.), Encyclopedia of Survey Research Methods. Sage Publications.
IFS/Wheatley Institute. (2025). In pursuit: Marriage, motherhood, and women’s well-being. Women’s Well-Being Survey conducted by YouGov, March 2025.
Ipsos & Global Institute for Women’s Leadership, King’s College London. (2026). International Women’s Day 2026: Gender equality attitudes across 29 countries. Ipsos Global Advisor. Published March 5, 2026. https://www.ipsos.com/en-uk/almost-third-gen-z-men-globally-agree-wife-should-obey-her-husband
19th News. (2025, September 26). Most men want a return to traditional gender roles, but women aren’t so sure. 19th News/SurveyMonkey poll, national sample of 20,807 U.S. adults, September 8-15, 2025. https://19thnews.org/2025/09/poll-traditional-family-gender-roles/
Puglisi, B. C. (2012). Digital Factics: Twitter. Digital Media Press (MagCloud).
Puglisi, B. C. (2025). Governing AI: When Capability Exceeds Control. ISBN 9798349677687.
Puglisi, B. C. (2025). AI as a Mirror to Humanity: Do What We Say, Not What We Do. basilpuglisi.com.
Puglisi, B. C. (2026). CBG Ethics Audit 2020-2025. basilpuglisi.com.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
YouGov. (2023, February 28). How leading questions and acquiescence bias can impact survey results. https://yougov.co.uk/politics/articles/45308-how-leading-questions-and-acquiescence-bias-can-im
Basil C. Puglisi holds a Master of Public Administration from Michigan State University. The Factics methodology (Facts + Tactics + KPIs) has been in operational practice since 2012. The HAIA-RECCLIN framework, Checkpoint-Based Governance, and related AI governance specifications are published openly at basilpuglisi.com and github.com/basilpuglisi/HAIA.
#AIassisted