
Before the Algorithm: How Teachers Built the Digital Classroom

The history of educational science software is, first and foremost, a history of teachers. Since the 1990s, teachers have been the ones building the tools that still equip thousands of classrooms today, coding in their own time, on evenings and weekends, for no pay. Over time, universities and foundations extended this offering with Scratch, PhET, Phyphox, and others: free software, often open source, adapted to real classroom needs because it was built by people who work there.


That tradition has been disrupted by the rise of EdTech companies and large technology groups, a disruption accelerated by Covid, which forced open the doors of institutions that had long resisted wholesale digitalisation. These new solutions offer polished interfaces and innovative features. But behind the promise of a modern digital education lurks the risk of business models that turn schools into captive clients, student data into a commodity, and teachers into unwitting salespeople.


Artificial intelligence is reshaping the landscape once again. It is democratising software creation: producing a quality educational tool no longer requires a career's worth of volunteer development, and the tradition of teacher-creators may be on the verge of a revival. Yet AI is also positioning itself as the ultimate educational tool, one that adapts to every student, answers every question, at any hour, with no intermediary. A single point of entry to knowledge, over which the teacher has no control: not over the narrative, not over the sources, not over how knowledge is framed and transmitted.


Marc Andreessen famously said that software is eating the world. The question now is whether artificial intelligence will eat educational software, or reinvent it.



When Calculation Entered the Classroom: Computers, Calculators and First Languages


The introduction of computing into schools did not begin with the web. In the United Kingdom, the story starts in earnest in 1981 with the BBC Micro. Built by Acorn Computers and backed by the BBC's Computer Literacy Project, the machine was designed explicitly for schools: robust, affordable, and accompanied by television programmes, lesson materials, and a growing library of educational software. By the mid-1980s, 80% of British schools had one. BBC BASIC, the language bundled with the machine, became the first programming environment that a whole generation of pupils encountered.


Across the Atlantic, the United States had its own parallel story. The Apple II, championed by MECC (Minnesota Educational Computing Consortium), a state-funded organisation, was distributed to schools across the country loaded with educational titles. From Number Munchers to The Oregon Trail, these programs gave millions of American children their first encounter with computing as a learning tool. The Oregon Trail itself was originally written in 1971 by Don Rawitsch, a student teacher, together with two fellow student teachers, on a school time-sharing system in Minneapolis accessed through a teletype. He printed out the code, deleted it from the network, and took the printout home. Three years later MECC hired him, and the game was reborn, eventually reaching a third of all US school districts. It remains perhaps the clearest illustration of the teacher-creator model: a tool conceived in a classroom, for a classroom, that became the most influential educational game ever made.


In both countries, the slide rule vanished from school bags in the 1970s, replaced by scientific pocket calculators from Hewlett-Packard, Casio, and Texas Instruments. No ministerial decree drove this change. It simply happened, because the tool was faster, more reliable, and falling in price. From the late 1980s, graphing calculators extended the range further: the TI-82, TI-83, and later the TI-84 became standard in the secondary science classroom, capable of plotting functions, running regressions, and solving equations numerically. They also included programming environments, TI-BASIC and Casio BASIC, which gave many students their first real experience of structured code. These environments were closed and proprietary, tied entirely to their manufacturers.


The spreadsheet arrived in the same period and became the first stable scientific use of computers in schools. Lotus 1-2-3, Excel, and later OpenOffice Calc allowed students to enter experimental data, plot graphs, and run linear regressions. These were general-purpose business tools repurposed for education. In retrospect, they constitute the first shared software layer across all scientific disciplines on a computer: a universal language of tabulated data, available everywhere, requiring no subject specialisation, and one that remains present in virtually every school thirty years later.



The Teacher-Creator Generation: Free Tools Born in the Classroom (1990s to 2010s)


The most distinctive chapter in the history of educational science software in the English-speaking world mirrors what happened in France: the discipline produced its own tools. The clearest Anglo-Saxon expression of this is Vernier Science Education. Founded in 1981 in Portland, Oregon, by David Vernier, a high school physics teacher, the company grew directly out of his frustration at the absence of affordable, usable data-acquisition tools for school laboratories. His first programs were scientific simulations for the Apple II, written for his own physics classes. By 1982 he had built Graphical Analysis, which allowed students to enter data and display it as a graph. What started in a spare bedroom became, over two decades, the de facto standard for science data logging in US and UK schools: Logger Pro.


Compatible with dozens of sensors (motion detectors, photogates, pH probes, gas pressure sensors), Logger Pro occupied exactly the same niche in the Anglo-Saxon world that Régressi occupied in France: the data-handling backbone of the secondary science laboratory. The origin story matters. David Vernier was a teacher who built what he needed because nothing suitable existed. Unlike his French counterparts, he commercialised the result, but the founding impulse was identical. The same structural fragility also applies: when Vernier retired Logger Pro in 2024 and migrated to a suite of cloud-based apps, thousands of schools had to adapt their entire laboratory workflow overnight.


The tradition of teacher-written software flourished across the whole English-speaking world. In the UK, the government-funded Microelectronics Education Programme explicitly paid teachers to write curriculum software in the early 1980s, producing a wave of discipline-specific programs distributed through local education authorities. In the US, the MECC library grew to hundreds of titles, many of them converted from programs that teachers had originally written for mainframe time-sharing systems.


These tools, commercial and volunteer-made alike, shared a structural strength: they responded to real classroom needs because they came from people who worked there. For the same reason, they were also structurally fragile. When Flash disappeared from browsers in 2020, hundreds of interactive science resources built over years by individual teachers became unusable overnight. The lesson applied equally in Manchester, Minneapolis, and Marseille.


Other free and open-source tools completed the landscape: Avogadro and RasMol for molecular visualisation, Tracker (developed in the US) for video analysis of motion, Scilab as a free alternative to MATLAB. Many originated in university research and spread virally through teacher networks, shared on school and department websites and at science teacher conferences. This horizontal, teacher-to-teacher diffusion, with no institutional mandate and no marketing budget, is the defining feature of the era.



Interactive Simulations: Science Without the Lab Bench (2000s to 2015)


The spread of broadband internet opened a new chapter: simulations accessible online. The idea was not new. Java applets had been circulating since the late 1990s on individual teachers' sites. But it took on a different scale with the emergence of structured, institutionally-backed projects.


PhET Interactive Simulations, founded in 2002 at the University of Colorado Boulder by physicist Carl Wieman, winner of the 2001 Nobel Prize in Physics, is the most significant such project globally. It offers hundreds of simulations in physics, chemistry, biology, and mathematics, free and Creative Commons licensed, translated into around forty languages. PhET simulations let students manipulate variables inaccessible in a school laboratory: adjust the mass of a planet, change an electron's charge, observe the energy levels of an atom. Today PhET is cited in curriculum guidance from the UK's Department for Education to the US National Science Foundation, and its usage figures run into the hundreds of millions of sessions annually.


In mathematics, GeoGebra occupies an equivalent position. Developed from 2001 by Markus Hohenwarter at the University of Salzburg, open source and free, it brings together dynamic geometry, algebra, calculus, and statistics in a single interface. Translated into more than fifty languages, it is now standard in secondary schools across the US, UK, Australia, and beyond. Its spread followed exactly the same path as PhET: viral, horizontal, driven by teachers recommending it to one another with no institutional push.


The European initiative Go-Lab, funded by the European Commission from 2012 to 2016, attempted to federate this ecosystem at a continental scale. Rather than creating its own simulations, Go-Lab aggregated virtual laboratories, remote labs giving real access to equipment at a distance (including at CERN and ESA), and experimental datasets within a single portal. Teachers could assemble structured inquiry learning sequences and share them with students or colleagues. It remains the most ambitious attempt at institutional coordination in this domain, prefiguring what a collectively designed digital educational infrastructure could look like.


The era also produced significant contributions from individual teachers: sites like The Physics Classroom (US) and PhysicsNet (UK) built up libraries of interactive resources. Most of those resources were written in Flash, and most expired silently in December 2020 when browser support ended. It was the first mass obsolescence event in educational software history: dependence on a proprietary technology caused years of collective development to expire in a single, synchronised stroke.



From BASIC to Robots: Learning to Program


Programming in schools predates Python by several decades. In the UK, BBC BASIC on the BBC Micro gave a generation of pupils their first encounter with the idea that a machine could follow instructions written by a human. In the US, Logo, developed at MIT in the late 1960s and popularised through the Logo turtle (a physical robot, or later an on-screen cursor, that children directed with simple commands), became the first widely used programming language in primary schools on both sides of the Atlantic throughout the 1980s. Seymour Papert's vision of learning by building things shaped an entire generation of thinking about computing education.


Programming then largely vanished from curricula for nearly two decades. Word processing and office software filled the space instead. Its return came through two parallel routes in the 2000s and 2010s.


The first was block-based programming. Scratch, developed by MIT Media Lab from 2003, removes the syntax barrier entirely and focuses on logic, using visual instructions snapped together like puzzle pieces. Translated into over sixty languages and used from primary school upward in dozens of countries, it gave many students their first genuine experience of building something with code.


The second was educational robotics. Lego Mindstorms, introduced in 1998, became a fixture in US and UK science and technology classrooms. VEX Robotics built a full competition ecosystem around programmable robots in secondary schools. Sphero brought robotics into primary classrooms. The shared premise: anchor logic in physical movement, and make the consequences of code visible in the real world.


Microcontrollers gave these approaches their full experimental dimension. Arduino (open source, Italy, 2005) took hold in technology and engineering classrooms. The BBC micro:bit, launched in 2016 and distributed free to every Year 7 pupil in the UK (one million devices, backed by a consortium including the BBC and Microsoft), brought Python and block programming into the hands of eleven-year-olds at scale. It is perhaps the most significant state-backed hardware initiative in educational computing since the BBC Micro itself. Raspberry Pi, born as a charitable venture at the University of Cambridge, became simultaneously a cheap Linux computer for ambitious school projects and the institutional backbone of UK computing education reform: the Raspberry Pi Foundation led the consortium that secured £78 million in government funding to establish the National Centre for Computing Education following the 2014 curriculum change.
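
To give a concrete sense of what "Python in the hands of eleven-year-olds" looks like, here is a minimal MicroPython sketch of the kind commonly written on a micro:bit. The task itself, logging the board's built-in thermometer to the LED display, is an illustrative example rather than a prescribed curriculum activity.

```python
# Illustrative micro:bit MicroPython sketch (hypothetical classroom task):
# read the board's built-in thermometer once a second and scroll the value
# across the 5x5 LED display.
from microbit import display, temperature, sleep

while True:
    display.scroll(str(temperature()) + "C")  # temperature in degrees Celsius
    sleep(1000)                               # pause for one second
```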


That 2014 reform deserves attention in its own right. England replaced mandatory ICT (essentially office skills) with mandatory Computing for all school-age pupils, encompassing computer science, information technology, and digital literacy. It was the most ambitious curriculum reform of its kind in the English-speaking world, explicitly treating programming and computational thinking as fundamental literacy rather than a vocational option.


In the US, Code.org, a non-profit founded in 2013 and backed by donations from Google, Amazon, and Microsoft, pursued the same goal through a different route. Through its Hour of Code campaign and free curriculum resources, it brought computer science to tens of millions of students, achieving at scale in the US what institutional reform achieved in England, without a government mandate.


Python's gradual arrival in UK and US science curricula followed a similar logic to France's 2019 reform. Python is not an educational tool. It is the language of researchers, engineers, and data scientists. To introduce it in secondary school is to align school practice with how contemporary science actually works, in which code has become as fundamental a skill as algebra.
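
To make that alignment concrete, here is the kind of short script such a curriculum asks of students. The measurements below are invented for illustration, but the task, a least-squares fit to experimental data where a spreadsheet regression once sufficed, is representative of classroom practice.

```python
# Illustrative classroom example (hypothetical data): fit a straight line
# to the measured extension of a spring versus applied load (Hooke's law).
import numpy as np

load_n = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])                   # force in newtons
extension_m = np.array([0.000, 0.021, 0.039, 0.062, 0.080, 0.099])  # metres

slope, intercept = np.polyfit(load_n, extension_m, 1)   # least-squares fit
stiffness = 1.0 / slope                                  # k = F / x

print(f"slope = {slope:.4f} m/N, intercept = {intercept:.4f} m")
print(f"estimated spring constant k = {stiffness:.1f} N/m")
```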



Scientific Instrumentation: From Proprietary Systems to Open Sensors


For decades, equipping a secondary school laboratory for computer-assisted experimentation meant a significant investment. In the English-speaking world, Vernier was the dominant player. Its LabQuest interfaces, data loggers, and sensor range (pH, pressure, force, motion, dissolved oxygen) became the standard across US and UK schools. Vernier's ecosystem was built on a closed-system logic: its sensors spoke to its interfaces, which fed its software. A school that invested in Vernier became its captive. Switching brands meant replacing everything. A full probeware setup could run to several thousand dollars, widening the gap between well-resourced schools and those operating with ageing equipment.


The disruption came from outside the education sector entirely. The proliferation of low-cost microcontrollers (Arduino, micro:bit, ESP32) brought with it an ecosystem of open, compatible sensors manufactured at scale for home automation, IoT, and industrial monitoring. A CO2 sensor, a pH probe, or a temperature sensor connected to an Arduino costs a few dollars where a proprietary probeware equivalent can exceed a hundred. Accuracy is sometimes lower, but often sufficient for secondary school needs. More importantly, the data is accessible in any environment: Python, a spreadsheet, or a mobile app.
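
As a minimal sketch of that openness, the script below reads values that an Arduino prints over USB serial using the pyserial library. The port name, the baud rate, and the assumption that the board prints one numeric reading per line are all placeholders that would depend on the actual setup.

```python
# Minimal sketch: read values an Arduino prints over USB serial.
# Assumes the board runs a sketch that prints one numeric reading per line
# (e.g. a temperature in degrees Celsius) at 9600 baud.
import serial  # pyserial: pip install pyserial

PORT = "/dev/ttyACM0"   # example only; e.g. "COM3" on Windows
BAUD = 9600

readings = []
with serial.Serial(PORT, BAUD, timeout=2) as board:
    for _ in range(30):                 # collect 30 samples
        line = board.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                    # timeout or empty line: skip
        try:
            readings.append(float(line))
        except ValueError:
            pass                        # ignore lines that are not numbers

if readings:
    mean = sum(readings) / len(readings)
    print(f"{len(readings)} samples, mean value = {mean:.2f}")
```

From there, the same values can be dropped into a spreadsheet or plotted with any charting library, which is exactly the flexibility the closed sensor-to-software chain never allowed.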


The monopoly of traditional probeware manufacturers on school science data acquisition is considerably less solid than it was ten years ago. The open question is standardisation. For the open ecosystem to fulfil its promise, software, communication protocols, and data formats need to converge. Without that, proprietary fragmentation is simply replaced by open-source fragmentation, which is just as difficult for a non-specialist teacher to navigate.



The Smartphone as a Scientific Instrument


A parallel shift, often underestimated, has quietly transformed the experimental toolkit available in classrooms: the smartphone. A device already in students' pockets contains, depending on the model, an accelerometer, gyroscope, magnetometer, barometer, GPS, microphone, camera, and light sensor. These are genuine physical measurement instruments, exploitable as soon as appropriate software exposes their data.


Phyphox (Physical Phone Experiments), developed by RWTH Aachen University in Germany and published in 2016, was the first tool to establish itself widely in secondary schools internationally. Open source and free, it provides access to all of the phone's sensors, displays readings in real time, and allows data export. Its remote control function, where a second device triggers data acquisition over the local network, is particularly useful in collective lab sessions: the teacher can start the measurement from their desk while students handle the equipment. Classic experiments become feasible without specialist kit: free fall measured with the accelerometer, pendulum periods, the Doppler effect via the microphone, measurement of Earth's magnetic field. Ulysse Delabre at the University of Bordeaux published a reference guide documenting achievable experiments and characterising sensor limits, enabling a rigorous approach to measurement uncertainty that stands alongside any probeware-based methodology.
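
A minimal worked sketch, with invented numbers, shows the kind of uncertainty treatment such measurements support: a ball dropped from a known height, the fall time measured several times (for instance with an acoustic stopwatch on the phone), and g recovered with an explicit error bar.

```python
# Hypothetical worked example of the uncertainty treatment described above:
# a ball dropped from height h, fall time t measured repeatedly.
# g = 2h / t^2; the uncertainty comes from the spread of the timings.
import numpy as np

h = 1.50                 # drop height in metres, assumed known to +/- 0.005 m
dh = 0.005
t = np.array([0.554, 0.548, 0.557, 0.551, 0.560, 0.549])   # fall times in s

t_mean = t.mean()
dt = t.std(ddof=1) / np.sqrt(len(t))       # standard error of the mean

g = 2 * h / t_mean**2
# Propagate relative uncertainties: dg/g = sqrt((dh/h)^2 + (2*dt/t)^2)
dg = g * np.sqrt((dh / h) ** 2 + (2 * dt / t_mean) ** 2)

print(f"g = {g:.2f} +/- {dg:.2f} m/s^2")
```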


FizziQ, launched in 2020 in partnership with the La main à la pâte Foundation, takes the same logic further with a more deliberately pedagogical design. Free, requiring no account, and usable offline, it combines sensor-based data acquisition with a structured experiment notebook, a community-built protocol library, and image analysis tools. Activities are shared between students and teachers by QR code. Through FizziQ Connect, it extends to microcontrollers (Arduino, micro:bit, ESP32) for measurements the smartphone cannot natively provide: CO2, pH, temperature probes, radiation.


What these tools share is structurally important: they are free, built by academic institutions or public-interest foundations, and they run on hardware students already own. They represent a continuation of the teacher-creator ethic of the 1990s, updated for a universal platform. Their main limitation is sensor variability across smartphone models, which complicates comparison of results across a class, alongside the still unresolved institutional question of whether and how phones belong in the classroom at all.



The Rise of EdTech: Between Promise and Reality


The 2010s saw a new kind of actor emerge in education: the EdTech company. Backed by private capital, with interfaces built for the smartphone generation, and carrying ambitious claims about personalised learning, EdTech promised to transform education the way digital technology had transformed other industries. Covid in 2020 acted as a massive accelerator. In a matter of weeks, platforms that had been niche products reached millions of users.


In science education, the most significant contribution has been virtual laboratories. Labster, founded in Denmark in 2011, offers more than 300 immersive simulations in biology, chemistry, physics, and genetics, drawing on video game techniques (3D environments, narrative, gamification) within a pedagogically rigorous framework. Studies show meaningful reductions in dropout rates in STEM courses where it is used. With over 6 million users in 100 countries and partnerships with more than 3,000 institutions, including the entire California Community College network, Labster represents an EdTech model that has delivered on its pedagogical promises. The cost is a substantial institutional subscription.


After a period of intense optimism between 2020 and 2022, when global EdTech investment exceeded $20 billion annually, the sector experienced a sharp correction. The Byju's case became emblematic of the excesses of that period. The Indian startup, valued at $22 billion in 2022, collapsed under its debts and a business model built on aggressive sales tactics directed at families. Khan Academy, a non-profit sustained by philanthropic grants, occupies a different position entirely: its model is closer to an academic foundation than a commercial EdTech, and it remains very difficult to replicate without substantial endowment.


In the US and UK, the structural realities are unforgiving. Purchasing decisions are often school-by-school or district-by-district. Budgets vary enormously. Procurement cycles are long. What this period produced is less a pedagogical revolution than a partial professionalisation of the digital toolkit available in schools: a genuine contribution, constrained by the realities of a market that frequently behaves like a non-market.



Covid: Accelerator and Revealer


On the evening of 23 March 2020, as lockdown was announced across the UK, hundreds of thousands of teachers found themselves having to teach science without a laboratory, without a bench, without students in front of them. In a matter of days, everything that the previous decade had failed to gradually normalise was installed by necessity: video conferencing, virtual labs, learning management systems, online resources.


For platforms already in place, it was an explosion of use. PhET recorded tens of millions of additional sessions; Labster signed contracts in weeks that would normally have taken years to negotiate. But Zoom and Teams simultaneously laid bare the limits of science at a distance. Students cannot manipulate, smell, or personally observe what happens in a test tube or under a microscope. The research on this is clear: reduced concentration, lower engagement, and a fundamental inability to reproduce the hands-on, sensory dimension of scientific work in a remote setting.


Once lockdowns ended, schools returned largely to in-person teaching, and most tools adopted in the emergency were gradually set aside. Retaining users proved far harder than acquiring them.


What Covid permanently changed was the legitimacy of digital tools in schools. Teachers who had never touched an educational technology tool adopted one by necessity, and many kept using it after returning to the classroom. The major platforms, Google Workspace for Education and Microsoft Teams for Education, used this period to establish themselves as default infrastructure across thousands of institutions, with a speed that no institutional policy process would have permitted. The Covid episode acted, for digital education, as a historical accelerator: with everything that implies in terms of solid gains and shortcuts whose fragility only became apparent later.



The Role of Public Institutions in Digital Tool Development


Public authorities have not been passive spectators of educational technology. They have been, in their own way, its discreet architects.


In the UK, the 1981 BBC Computer Literacy Project was a state-backed initiative to put computing capability into schools and homes simultaneously, producing a machine, a curriculum, and a broadcast education programme as a single integrated package. The result (80% of British schools equipped with a BBC Micro within five years) remains one of the most effective instances of coordinated public investment in educational technology anywhere. Four decades later, the 2014 Computing curriculum reform and the £78 million National Centre for Computing Education represented a second major public commitment: this time not to hardware, but to teacher training, curriculum resources, and community infrastructure.


In the US, the tradition runs differently. Through state-level consortia like MECC, federal research funding behind projects like PhET, and a robust non-profit sector including Khan Academy and Code.org, the US has achieved comparable public-interest outcomes through structures that mix state, philanthropic, and private capital. Code.org alone has made computer science accessible to over 70 million students, a public good produced through private philanthropy rather than government mandate.


The most structurally valuable public contributions are those that produce digital commons: infrastructure, interoperability, and shared resources that no private actor has the interest or capability to build. In the UK, the Raspberry Pi Foundation's freely available curriculum materials, the Computing At School network, and the micro:bit's open hardware ecosystem exemplify this logic. What is still missing, on both sides of the Atlantic, is explicit coordination between these layers. A teacher today navigates between a school LMS, a district licensing agreement, a free university resource, and a tech company-funded tool, without perceiving any overall architecture. As AI multiplies the number of actors capable of producing educational content, that coordination becomes more urgent, not less.



The Ultimate Software: When the Machine Adapts to Every Student


The arrival of large language models in schools from 2022 represents a rupture of a different nature from all previous ones. Until now, every educational tool was designed for a specific subject, level, or type of activity. Generative AI has no domain. It answers any question, in any discipline, at any hour, in the language and register of the user. It adapts to the level without being told, rephrases until it is understood, does not judge, and does not tire. For a student stuck on a mechanics problem at 11pm, it functions as a tutor who is always available.


Usage is growing fast, diffuse, and largely invisible to institutions. Students use ChatGPT, Claude, or Gemini to paraphrase a lesson, explain a concept, or check a line of reasoning: sometimes to understand, sometimes to avoid thinking. More specialised tools are emerging. Khanmigo, Khan Academy's AI tutor, guides students through questions rather than answers. In the UK, Ofsted and the Department for Education have begun grappling with what AI means for assessment integrity. In the US, most major school districts swung from banning these tools to cautiously embracing them within months of their introduction.


What makes this rupture fundamentally different from previous ones is the question of control. When a student uses Vernier's Graphical Analysis or a PhET simulation, the teacher knows what the tool does, what it shows, and according to what logic. When a student uses a large language model, they enter a system over which the teacher has no grip: not over the narrative, not over the sources invoked, not over how knowledge is shaped and transmitted. The model can simplify where rigour is needed, reassure where effort should be required, and produce a fluent, convincing explanation that is simply wrong.


The answer is not prohibition, which would be as futile as trying to ban the internet. The real question is what place these tools occupy in the educational relationship, and what role the teacher can still play when the student has direct, permanent, frictionless access to a machine that answers everything. Generative AI may be the first digital tool that does not come to complement the educational apparatus. It short-circuits it.



AI as a Production Tool: Towards a Renaissance of Creator-Teachers?


Artificial intelligence does not only change how students access knowledge. It changes how educational software itself is produced, and this shift may be, in the longer term, the most structurally significant of all.


For decades, creating good educational software required a rare combination: a teacher's subject expertise, a developer's technical mastery, and time, usually evenings and school holidays, devoted to a project with no payment. Logger Pro required a career. PhET required years of development and a Nobel Prize's worth of institutional credibility to secure university funding. This cost in time, skills, and personal sacrifice limited the creator community to a small number of exceptionally motivated individuals.


AI coding tools change this equation radically. A physics teacher with a clear pedagogical vision but limited programming skills can today produce a functional prototype in a few weeks. The resulting tools will likely be better across several dimensions: modern HTML5 interfaces, precise alignment with curriculum requirements, rapid updating when programmes change, cross-platform by default.


This democratisation carries a real promise: a revival of the teacher-creator model at a scale and speed incomparable to the 1990s. It also carries a real risk. If everyone can produce their own tool, the fragmentation already documented across educational software could grow from a few dozen environments to several hundred, without interoperability, and without maintenance guaranteed beyond the creator's initial enthusiasm.


There is also a deeper question. When code is produced largely by a machine trained on the collective knowledge of humanity, the value of the software no longer resides in the lines of code themselves. It resides in the pedagogical vision that guided its creation. This shift strengthens the case for open source: code that a machine helped write from shared human knowledge belongs, in some meaningful sense, to the community that generated that knowledge.



Conclusion


The history of educational science software is, at its core, a history of trust: trust placed in teachers to identify what the classroom actually needs, trust placed in institutions to protect what cannot be left to the market, trust placed in students to learn with tools that meet them where they are. The three decades covered here trace a coherent arc. From isolated creators to academic foundations, from Flash simulations to Bluetooth sensors, each step enlarged what was possible in a classroom without ever settling, once and for all, the question of what was desirable.


Artificial intelligence reopens that question on two fronts simultaneously. On one side, it presents itself as the ultimate learning tool: available at any hour, capable of explaining, correcting, generating exercises or diagrams on demand. On the other, the same capability can be directed toward the creation of new tools that will live their own life in the service of education. A physics teacher with a clear idea and a few weeks of work can today build what Jean-Michel Millet or David Vernier could only have assembled over a career. That potential deserves to be actively encouraged: through funding mechanisms suited to individual projects, through spaces where these creations can be shared, improved, and collectively maintained, and through institutional recognition of the time it takes.


The diversity of specialised tools, created by individuals, foundations, or EdTech companies, is not an anachronism in the face of generalist AI. It is its necessary counterweight. Software designed to track a trajectory, simulate a titration, or visualise a magnetic field offers what no all-purpose system can guarantee: a deliberate pedagogical framework, a considered progression, an anchor in the curriculum.


The school has always found a way to domesticate its tools: the slide rule, the spreadsheet, the internet. Whether it will do the same with AI is a choice, not an inevitability. And it remains, as it has always been, the teachers' choice to make.





 
 
 
