The Summit of the Future:
3 Things Every Law Student Should Know

By Bethany Ho, LAW ‘27

Global inequalities have deepened over the last few decades as environmental and social forces have disproportionately favored the already wealthy and powerful. This trend cuts against the global commitments that states made in the 2015 Sustainable Development Goals (SDGs), which were centered on overcoming inequality. As part of an effort to better tackle these challenges, the United Nations is holding the Summit of the Future (SOTF) in New York City starting this week and continuing into the weekend.

Aimed at boosting cooperation across the globe, the Summit of the Future is a conference meant to improve the systems and structures necessary to accelerate efforts to meet existing international commitments such as the SDGs and to take concrete steps to respond to emerging challenges and opportunities. As Secretary-General António Guterres notes, “Unchecked inequality is undermining social cohesion, creating fragilities that affect us all. Technology is moving ahead without guard rails to protect us from its unforeseen consequences.”

To help create these guardrails, the Secretary-General has launched high-level initiatives including the DPI Safeguards Initiative and the High-Level Advisory Body on Artificial Intelligence. Both initiatives are tied to the state-led drafting of the Global Digital Compact (GDC), to be adopted at the Summit.

By convening the Summit of the Future, UN leadership hopes to establish the greater level of international cooperation necessary to achieve a prosperous future and to reinvigorate trust at a time when many people and governments are questioning the relevance of the UN as a global governing body. As law students and global citizens, we are stakeholders in these processes. Here are the top three things you should know about the Summit of the Future, and how the decisions made there will affect your daily life.

1. Reinvigorating the Sustainable Development Goals

The Sustainable Development Goals (SDGs) were adopted in 2015 by UN Member States, setting out a shared vision of “peace and prosperity for people and the planet, now and into the future.” Seventeen goals were negotiated with the intent of meeting the basic needs of people around the world, ensuring equitable access to resources, and fostering innovation and sustainability.

However, lax implementation and worldwide crises, such as the COVID-19 pandemic, rising authoritarianism and fractured geopolitics, wars, and natural disasters, have left the world far off track to meet the SDGs by their 2030 deadline. Over 30% of the 140 targets set for the SDGs show severe deficits in progress toward the indicators that states set for themselves: either no progress has been made, or states are backsliding on their commitments.

Despite these setbacks, worldwide public and private infrastructures for electricity, computing power, and information and communication technologies have continued to develop and expand. States and the UN say these shifts in the innovation economy offer new hope of achieving significant change between the SOTF and 2030, and beyond. While rapidly developing technology has given rise to an influx of new areas that require regulation, it can also potentially be used to accelerate progress toward the SDG targets – at least, that is the dominant narrative underpinning the SOTF in September and the GDC negotiation process. And there is some reason to be hopeful. For example, new data management tools may allow authorities and organizations to administer expanded social protection systems for citizens and residents more efficiently. Both the opportunities that technology opens up and more complete and sophisticated statistical knowledge may help to realize the ultimate goals of equity and equitably enjoyed prosperity. That is the bet on the table at the SOTF.

With these technological opportunities come risks to fundamental rights like privacy, equality, democracy, and access to justice, and thus the need for guardrails and regulation. In preparation for the Summit, the Secretary-General has implemented two key high-level programs to help create these guardrails, focusing on digital public infrastructure (DPI) and artificial intelligence (AI).

2. Guardrails for Digital Public Infrastructure (DPI)

As digitally interconnected technology and computing power have progressed rapidly over the last few decades, the UN has focused on the potential these tools offer for achieving the SDGs. The Global Digital Compact was proposed to establish an “open, sustainable, fair, safe and secure digital future for all.” The Compact standardizes measures to bridge divides in access to and participation in the digital world. Addressing both accessibility and fair use of technology could improve international cooperation and build capacity faster in lower- and middle-income countries across the majority world.

The Compact seeks to engage governments, nongovernmental organizations, and the private sector to ensure stable digital infrastructure. It lays the foundation for widespread access to the internet, for closing digital divides “between and within countries,” and for incorporating technology into integrated systems that serve the public interest. It also aims to develop digital public goods and DPI that recognize and mitigate risks to human rights and meet the needs of each country and the international community. To create a robust foundation for a just and equitable digital future, the Compact calls for safeguards to accompany the growing collection and use of data to design and run social programs.

The United Nations Development Programme (UNDP) and the Secretary-General’s Tech Envoy have jointly taken an interest in DPI to encourage a “safe and inclusive digital society.” The two offices launched an initiative in January 2024 that brings together 44 experts from around the world to draft a guardrails framework, the Universal Safeguards Framework for Safe and Inclusive DPI. That framework will seek to ensure that DPI follows a common set of standards translated into policy and practice. DPI is hard to define, but most agree that it encompasses large, “society wide” technologies such as national digital identity programs, digital payment systems like Pix in Brazil, or “data exchanges,” such as a system for sharing health data – safely and securely – among thousands of providers with the aim of offering better research and care.

The initiative has produced its first interim report, which details the Universal DPI Safeguards Framework and the guardrails that should form the backbone of any DPI implementation. When a DPI system is deployed, personal data and privacy are at risk, especially for members of marginalized groups. For instance, “DPI systems which provide access to sensitive health data, for example related to birth or abortion, could present privacy risks, particularly for women and girls.” In cases where DPI systems allow data to be shared across agencies providing different services, “the behaviour of large portions of the population is observable, enabling public and private sector agencies to track individuals.” With increased surveillance comes a “chilling effect,” in which people may not express their views for fear of harassment or punishment.

The Global Digital Compact also acknowledges the necessity of consent for the use and sharing of personal data, and the high degree of security needed to protect it. The ideals found in the Compact are being incorporated, in parallel, into the design of the Universal DPI Safeguards Framework. The UN hopes these efforts will center the public interest and the common good as DPI is integrated into public and private institutions and into people’s lives around the world.

3. Guardrails for Artificial Intelligence (AI)

To catalyze efforts toward AI regulation, the UN Secretary-General has convened a multi-stakeholder High-Level Advisory Body on Artificial Intelligence (HLAB on AI). Drawing together the views of professionals and organizations from different sectors, the HLAB on AI will report its analysis and recommendations for consideration at the Summit of the Future.

The HLAB has its work cut out for it, and AI is front and center in the Compact itself. Under the Compact, the United Nations recommends that the use of AI comply with human rights and international law. Misinformation is a particularly striking example of the risk: AI can quickly be used to create and spread false, though convincing, information and narratives.

Recently, Elon Musk shared an AI-generated advertisement that cloned Kamala Harris’s voice, with the effect of misleading viewers about her campaign goals. The move has drawn sharp criticism and raised concerns for the upcoming U.S. election, as there was no indication whatsoever in the post of the advertisement’s “satirical” nature. Generative AI poses new risks to the integrity and veracity of election information on the internet, jeopardizing voters’ ability to make educated decisions about their country’s leadership and future.

The use of technology must respect human rights if the integrity of our systems of government is to be preserved, and the Compact aims to mitigate the risks of generative AI through measures such as warning labels on synthetic content and safeguards in AI-training processes that address strong concerns over biased training data.

Conclusion

The Summit of the Future and the preceding Action Days are an opportunity for the law to anticipate and navigate concerns related to the use of personal data. As stakeholders in the Compact, law students around the world can and should continually inform themselves about the UN’s projects, press for changes that align with the goals of the UN, and advocate to protect rights and promote effective remedies. The Temple University Institute for Law, Innovation & Technology (iLIT) conducts research and engages in forums related to the Global Digital Compact and other UN processes. Together, by contributing to the discussion and pushing for human rights-based policies, we get one step closer to a world in which social and political investment overcomes structural discrimination and technological innovation cannot exploit vulnerability or leave anyone behind.

Bethany Ho is a second-year JD/MBA candidate at Temple University Beasley School of Law and the Fox School of Business. Her interests include IP, corporate law, and cybersecurity.

This blog is a part of iLIT’s student blog series. Each year, iLIT hosts a summer research assistant program, during which students may author a blog post on a topic of their choosing at the intersection of law, policy, and technology. You can read more student blog posts and other iLIT publications here.
