Introduction: from novelty to necessity
Around 2010, artificial intelligence (AI) was largely the domain of research labs and science fiction movies. Today AI assists us when we search online, stream a film, or speak to a digital assistant. By 2035 it is expected to underpin nearly every aspect of daily life. The next decade will see wearable devices that monitor our health in real time, offices guided by virtual assistants, farms managed by algorithms, and courts that rely on machines to draft briefs and check evidence. This essay explores what an ordinary day in 2035 might look like when AI has moved from novelty to necessity. Each section draws on recent market forecasts, scientific studies, and expert opinions to paint a plausible, evidence‑based scenario.
Morning: AI wearables and anticipatory living
In 2035 your day begins not with a jarring alarm but with a wearable device that quietly rouses you when your sleep cycle reaches its lightest phase. Today’s smartwatches and rings already track heart rate and sleep phases; the wearable medical devices market is projected to grow from roughly US$53.73 billion in 2025 to about US$502.85 billion by 2034, reflecting how integral these devices are becoming. Analysts attribute this growth to demand for home healthcare and preventive monitoring; diagnostic devices account for nearly 63 % of current wearable revenues. Another research report estimates that the market could reach US$499 billion by 2035. These projections suggest that most people in 2035 will own at least one wearable with medical sensors.
These devices will not just record data; they will act as personal oracles. According to Imperial College London’s white paper on healthcare innovation, wearable sensors and AI‑driven diagnostics are among the technologies expected to transform healthcare over the next decade. By 2035 your smart ring or glasses will continually analyse your heart rhythm, blood glucose, stress hormones, and even your gut microbiome. If your iron levels drop, your AI companion may suggest ordering a breakfast rich in leafy greens and call a delivery drone to fetch eggs from a local farm. The Guardian’s recent interactive scenario of an AI spotting missing groceries may sound fanciful, but current developments make it plausible. Today’s Whoop strap already uses AI to advise athletes on training loads, and smart toilets under development can analyse waste to detect disease. Health insurers and employers may subsidise such devices because they reduce hospital admissions.
Personal news consumption will be equally anticipatory. Instead of doomscrolling, your AI reads news sources, fact‑checks claims, and summarises multiple perspectives. The Pew Research Center found that 60 % of Americans would feel uncomfortable if their healthcare provider relied on AI; similar scepticism exists toward AI‑curated news, so future systems will need transparent algorithms. AI companies are experimenting with retrieval‑based models that cite their sources and correct earlier misinformation. By 2035 your morning briefing may explain both sides of a contentious issue and adjust its tone based on your mood, using techniques from affective computing to reduce stress.
Work: the shrinking workweek and AI collaborators
Once you begin work, AI becomes your colleague. It reminds you of meetings, transcribes conversations, and suggests follow‑ups. Thomson Reuters’ survey of professionals found that respondents expect AI to save four hours of work each week within a year and 12 hours per week within five years, effectively adding an extra colleague. Lawyers believe the technology could free them to focus on nuanced strategy rather than document review. Bill Gates speculates that AI may lead to a two‑ or three‑day workweek within the next decade; he told a late‑night television audience that AI will “do most things” humans currently perform and predicted that people will work less and have more leisure. The 4‑day week pilot studies reinforce this view: trials involving hundreds of companies show that compressing work into four days maintains or improves productivity while enhancing well‑being. At Microsoft Japan, a four‑day week lifted productivity by roughly 40 % and cut electricity use, while Iceland’s national trials maintained or improved output. Such experiments are gradually mainstreaming the idea of a shorter workweek.
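Those projected savings are easy to put in context. A quick back‑of‑envelope sketch, assuming a standard 40‑hour week (an assumption; the hour figures are the only inputs taken from the survey above):

```python
# What share of a standard workweek do the projected AI time savings represent?
# The 40-hour baseline is an assumption; the hour figures come from the
# Thomson Reuters survey cited above.
hours_per_week = 40
saved_year_one = 4    # hours/week expected within a year
saved_year_five = 12  # hours/week expected within five years

print(f"Year one: {saved_year_one / hours_per_week:.0%} of the week")   # 10%
print(f"Year five: {saved_year_five / hours_per_week:.0%} of the week") # 30%
```

Twelve hours is nearly a third of a conventional week, which helps explain why such savings are framed as the equivalent of an extra colleague.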
However, AI does not just shorten the week; it transforms job roles. Automation may eliminate repetitive tasks, leaving humans to provide creativity and empathy. In the United Kingdom, the National Foundation for Educational Research warns that up to three million low‑skilled jobs could disappear by 2035 as automation spreads across trades, machine operations, and administrative roles. Research firm PwC projects that AI could automate 40–80 % of routine tasks across sectors, though it will simultaneously create demand for new roles in data stewardship, AI ethics, and human–machine collaboration. The future of work will involve constant reskilling; your AI coworker will recommend courses, summarise new legal precedents, or automate spreadsheets. Human managers will cultivate qualities such as curiosity, empathy, and critical thinking that AI cannot replicate.
Universal basic income (UBI) may become part of this new social contract. More than 160 UBI experiments have been conducted worldwide over four decades, with pilots in Europe, North America, and Asia since 2015 showing positive effects on poverty, health, and education. A Newsweek analysis notes that AI could disrupt 45 million American jobs by 2028, and 30 % of U.S. roles may be automated by 2030. Advocates like entrepreneur Andrew Yang argue that a “Freedom Dividend” of US$1,000 per month funded by taxes on technology companies could help citizens adapt. While funding UBI presents fiscal challenges, it may become more politically palatable as automation displaces workers.
Health: from reactive to predictive
Modern medicine is largely reactive: people seek care once symptoms appear. AI promises a shift toward prediction and prevention. The AI in healthcare market is forecast to surge from around US$36.96 billion in 2025 to US$613.81 billion by 2034. Another report projects growth to US$543.83 billion by 2035. These estimates reflect enormous investment in data analytics, imaging, drug discovery, and wearable diagnostics. Governments recognise the potential: the UK health secretary recently declared that AI‑driven diagnostics and personalised treatments will shape the future of healthcare.
By 2035 your wearable may detect subtle changes in gait indicating the onset of Parkinson’s disease or monitor immune‑system markers to warn of infection days before symptoms appear. Smart bandages and skin‑mounted sensors will transmit data to AI systems that compare your metrics with anonymised global databases. When an anomaly emerges, the AI consults guidelines and arranges a telemedicine appointment. Such systems already exist in nascent forms: the U.S. Food and Drug Administration has approved AI that reads electrocardiograms to detect heart abnormalities. Researchers at Pennsylvania State University and partner institutions are developing wearable sensors and robotic feeders to identify respiratory disease in cattle, applying deep learning to analyse behaviour. Similar technology could be monitoring humans by 2035.
Public acceptance of AI medical advice, however, remains a hurdle. A PLOS Digital Health survey found respondents almost evenly split: 52.9 % preferred a human doctor while 47.1 % chose an AI clinic; when primary care physicians endorsed AI, acceptance rose, and disease severity had little effect. A Pew Research survey similarly reported that 60 % of U.S. adults would feel uncomfortable if their provider relied on AI, and only 38 % believed AI would improve health outcomes. These findings underline the importance of trust and transparency. AI systems must explain their reasoning, display uncertainty, and allow humans to overrule them.
By 2035 hospitals may resemble control centres rather than wards, with AI triaging cases and directing robotic assistants. Clinicians will increasingly supervise algorithms and provide human connection rather than routine tasks. Yet caution is warranted: generative models occasionally “hallucinate” by producing plausible but false information, as seen when an AI generated fabricated legal citations in the Mata v. Avianca case. Preventing hallucinations and bias will be critical to avoid misdiagnosis.
Agriculture and the environment: precision, productivity, and vertical farms
Your breakfast of scrambled eggs and spinach may come from fields managed by AI. In agriculture, AI‑enabled robots, drones, and sensors will optimise water use, monitor plant health, and reduce chemical inputs. Future Market Insights estimates that the AI in agriculture market will grow from US$5.9 billion in 2025 to US$61.3 billion by 2035, reflecting a 26.3 % compound annual growth rate. The driver is the need to increase yields while minimising environmental impact. John Deere’s See & Spray system uses computer vision to distinguish crops from weeds and apply herbicide only where necessary, achieving 77 % average herbicide savings. An independent analysis of its 2025 season across five million acres reported that farmers reduced herbicide use by nearly 50 %, saved 31 million gallons of herbicide mix, and improved yields by up to 4.8 bushels per acre.
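The market figures above can be sanity‑checked against the stated growth rate. A minimal sketch, assuming simple annual compounding (the convention such forecasts typically use):

```python
# Verify the compound annual growth rate (CAGR) implied by the
# AI-in-agriculture market forecast cited above (US$ billions).
start, end, years = 5.9, 61.3, 10  # 2025 -> 2035

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~26.4%, close to the reported 26.3%

# Conversely, compounding the start value at the reported rate:
projected = start * (1 + 0.263) ** years
print(f"Projected 2035 market: US${projected:.1f} billion")  # ~US$60.9 billion
```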
AI is also transforming animal husbandry. On some UK dairy farms, overhead scanners equipped with 3D cameras monitor cows’ weight, body condition, and gait as they leave the milking parlour. The system can detect signs of lameness earlier than farmers can, measures weight with 95 % accuracy, and matches human experts in evaluating body condition. The University of Bristol leads a project using AI to analyse cows’ social interactions to spot early signs of mastitis and lameness. When applied to human health, such algorithms could detect subtle changes in social behaviour indicating mental illness.
The push toward environmental sustainability will also bring vertical farming into cities. Vertical farms stack crops indoors under artificial light, allowing year‑round production with minimal pesticides. The global vertical farming market was valued at around US$9.66 billion in 2025 and is expected to reach US$74.35 billion by 2035, expanding at 22.6 % annually. But vertical farms currently produce less than 1 % of the world’s vegetables because they are 4–10 times more energy‑intensive than greenhouses, and produce costs can be US$3.07 per pound versus US$0.65 for conventional farms. The success of urban farming will depend on cheaper LEDs, renewable energy, and automation. A 2025 scenario analysis suggests that if automation cuts labour costs by 50 % and more efficient lighting reduces energy costs, vertical farms could supply up to 10 % of the global vegetable supply by 2035. Meanwhile microgreens—a common vertical farm crop—are projected to grow from a US$1.8 billion market in 2022 to US$2.6 billion by 2031, indicating growing consumer demand for specialty produce.
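The cost gap cited above determines how far vertical‑farm economics must move. A small illustrative calculation using only the per‑pound figures from the text:

```python
# How large is the vertical-farm cost premium, and how much would costs
# need to fall for price parity? Figures are the per-pound costs cited above.
vertical_cost = 3.07      # US$/lb, vertical-farm produce
conventional_cost = 0.65  # US$/lb, conventional produce

premium = vertical_cost / conventional_cost
print(f"Vertical-farm produce costs ~{premium:.1f}x conventional")  # ~4.7x

required_cut = 1 - conventional_cost / vertical_cost
print(f"Cost reduction needed for parity: {required_cut:.0%}")  # ~79%
```

A reduction of nearly four fifths helps explain why the scenario analysis hinges on halving labour costs and cutting energy costs at the same time.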
Law and justice: faster, fairer or flawed?
AI is already drafting legal documents, summarising discovery, and forecasting case outcomes. By 2035 courts may rely heavily on AI to manage caseloads. According to Thomson Reuters, professionals expect AI to free 12 hours per week of their time within five years. Law firms are exploring AI to screen evidence, research case law, and even predict how judges might rule. However, early experiments reveal pitfalls. A 2024 American Bar Association (ABA) report warns that deepfakes and AI‑generated disinformation are undermining evidence: courts face forged videos and voice clones that mimic witnesses or even judges, forcing them to develop new authentication protocols. The ABA recommends that lawyers and judges receive training to spot manipulated media and that AI tools be certified for reliability.
Concerns also exist about the impartiality of algorithmic sentencing. If AI models are trained on biased historical data, they risk perpetuating injustices. In 2035, courts may require “algorithmic explainability statements” much like financial prospectuses, detailing how a system makes recommendations. Human judges will still preside to ensure due process, but they may rely on AI for risk assessments and legal research. Legal AI could compress months of work into hours, lowering costs and improving access to justice for people who cannot afford lawyers. Yet the risk that an AI might hallucinate a precedent or overlook nuance underscores the need for cautious adoption and regulatory oversight.
Energy and the environment: the hidden cost of AI
AI’s benefits come with significant energy demands. A World Economic Forum analysis warns that training a single large language model consumes enormous electricity—training GPT‑3 consumed about 1,300 megawatt‑hours, equivalent to the annual power use of 130 U.S. households, and GPT‑4 required around 50 times more. Data centres supporting AI currently consume more than 1 % of global electricity; the International Energy Agency projects that consumption could double to 945 terawatt‑hours by 2030 and increase further to 1,193 TWh by 2035, with AI accounting for 35–50 % of data‑centre power use. For context, 1,000 TWh roughly equals the annual electricity demand of Japan. If not offset by renewable energy, AI expansion could exacerbate climate change.
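The household comparison and the 2030–2035 trajectory above are straightforward to reproduce. A sketch; the ~10 MWh/year figure for a typical US household is an assumption consistent with the 130‑household equivalence cited in the text:

```python
# Reproduce the energy comparisons cited above.
gpt3_training_mwh = 1_300    # reported GPT-3 training energy
household_mwh_per_year = 10  # ~typical US household annual use (assumption)

households = gpt3_training_mwh / household_mwh_per_year
print(f"GPT-3 training ~= annual use of {households:.0f} US households")  # 130

# IEA data-centre demand trajectory from the text (terawatt-hours):
demand_2030, demand_2035 = 945, 1_193
growth = demand_2035 / demand_2030 - 1
print(f"Projected 2030-2035 growth: {growth:.0%}")  # 26%
```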
Yet AI also offers tools for mitigation. In manufacturing and logistics, AI optimises supply chains to reduce waste and emissions. Quantum computing, still nascent, may improve battery materials and carbon capture. Forecasting models help power grids integrate wind and solar by predicting the weather with high resolution. The challenge is ensuring that AI’s environmental benefits outweigh its carbon footprint.
AI timelines and existential questions
A deeper question underlies our 2035 vision: will artificial general intelligence (AGI)—machines that match or exceed human cognitive versatility—emerge by then? An analysis of 8,590 predictions shows that most AI experts consider AGI inevitable, and recent surveys predict it around 2040, though earlier predictions placed it around 2060. Entrepreneurs are more bullish, expecting AGI around 2030. While there is no consensus on how to achieve AGI, the average of surveys indicates a 50 % probability between 2040 and 2061. This timeline suggests that by 2035 we may still be operating with “narrow” AI that excels in specific tasks rather than truly general intelligence. However, incremental progress in large‑scale models is closing gaps quickly. Should AGI arrive, our social and political institutions must adapt rapidly to control superintelligent systems.
Ethics, inequality, and human connection
AI’s integration into daily life will not automatically lead to utopia. It raises ethical questions about privacy, surveillance, inequality, and mental health. Wearables may monitor intimate biological and behavioural data; misusing this data could enable discrimination. Legal scholars warn that the same technology that identifies disease might also be used by insurers to deny coverage or by employers to assess productivity. MIT Sloan’s review of generative AI highlights that models sometimes generate biased or fabricated content (hallucinations) because they reflect the training data. To mitigate these risks, developers are investing in techniques such as fact‑checking modules and retrieval‑augmented generation, and regulators are drafting AI transparency laws.
The distribution of AI’s benefits may also be unequal. Low‑income communities lacking broadband or digital literacy could be excluded from AI‑driven healthcare and education. Jobs requiring physical dexterity and social intelligence may be insulated from automation, while routine clerical work disappears. The NFER report that up to three million low‑skilled UK jobs could vanish by 2035 underscores the risk of widening economic divides. If UBI or reskilling programmes are not widely adopted, inequality may worsen. On the other hand, AI could empower disabled and elderly individuals by providing companion robots, sign‑language translators, and customisable user interfaces.
AI might also change our cognitive abilities. Over‑reliance on recommendation systems and digital assistants could diminish memory and attention spans. There is some evidence that using GPS weakens spatial memory; similar concerns apply to AI. Designers must therefore build systems that support human agency rather than replace it. Some futurists warn of “cognitive decline” if machines make every decision, while others argue that AI will free us to focus on creativity and relationships. The reality will likely involve trade‑offs.
The positive vision: human‑centric AI by 2035
Despite the challenges, AI holds promise for a future that is healthier, greener, and more humane. By 2035 your home will anticipate your needs and reduce waste. Your workweek will be shorter, leaving time for family, art, and community. Healthcare will focus on preventing illness rather than treating it, with AI assisting doctors instead of replacing them. Farms will become more sustainable, producing fresh food in urban centres while using fewer chemicals. Courts will deliver justice more quickly and equitably by automating routine tasks. New industries will emerge around AI ethics, education, maintenance, and regulation.
To realise this vision, society must prioritise human values. Governments and organisations need to invest in digital literacy, equitable access, privacy protection, and renewable energy. Regulatory frameworks should require explainability and accountability for AI systems. Transparent and inclusive governance can ensure that AI benefits the many rather than the few. As AI continues to advance toward general intelligence, the next decade will be pivotal in shaping its role in our lives. By engaging thoughtfully with the technology today, we can steer the world toward an AI‑enhanced 2035 that amplifies human potential rather than diminishes it.
References
AIMultiple. (2025). When will AGI happen? 8,590 expert predictions analyzed. https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
American Bar Association. (2024). Deepfakes, generative AI, and the risks to legal evidence. https://www.americanbar.org/groups/science_technology/publications/scitech_lawyer/2024/fall/deepfakes-generative-ai-risks-legal-evidence/
Carbon Brief. (2024). The rising energy footprint of AI and data centres. https://www.carbonbrief.org/analysis-ai-data-centres-energy-demand/
Darzi, A., Butterworth, G., & Howitt, P. (2025). The future state of health and healthcare in 2035. Institute of Global Health Innovation, Imperial College London. https://www.imperial.ac.uk/news/265513/vision-healthcare-2035-from-wearables-robots/
Gates, B. (2024). AI, work, and the future of productivity. Gates Notes. https://www.gatesnotes.com/AI-and-the-future-of-work
IBM. (2024). What are AI hallucinations? https://www.ibm.com/topics/ai-hallucinations
International Energy Agency. (2024). Electricity 2024: Data centres and AI. https://www.iea.org/reports/electricity-2024
Keynes, J. M. (1930). Economic possibilities for our grandchildren. In Essays in Persuasion. Macmillan.
MIT Sloan Teaching & Learning Technologies. (2024). Addressing AI hallucinations and bias. https://mitsloanedtech.mit.edu/ai-hallucinations-bias/
National Foundation for Educational Research. (2025). AI and automation: UK employment outlook to 2035. https://www.nfer.ac.uk
Pew Research Center. (2024). How Americans view artificial intelligence and its impact on daily life. https://www.pewresearch.org
PwC. (2024). The value of AI: Productivity and workforce transformation. https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence.html
Thomson Reuters. (2024). AI is set to save professionals up to 12 hours per week by 2029. https://www.thomsonreuters.com/en/press-releases/2024/july/ai-set-to-save-professionals-12-hours-per-week-by-2029.html
U.K. Department for Environment, Food & Rural Affairs. (2024). AI and precision agriculture in the UK. https://www.gov.uk/government/publications
World Economic Forum. (2024). Artificial intelligence and the future of jobs. https://www.weforum.org/reports/the-future-of-jobs-report-2024