The second day of Beaconhouse’s School of Tomorrow (SOT) Edition XIX opened with the prestigious Beaconhouse Distinguished Alumni Awards, recognising outstanding contributions of Beaconhouse graduates across professional, creative, and civic sectors. Presented by the Beaconhouse Old Students Society (BOSS), the ceremony featured Federal Minister for Information and Broadcasting Ata Tarar and Punjab Finance Minister Mian Mujtaba Shuja-ur-Rehman, who honoured alumni for their achievements and service to Pakistan and the global community.
The day then unfolded with a dynamic lineup of discussions, starting with Sarah Ahmad, MPA Punjab and Chairperson of the Child Protection & Welfare Bureau. Other prominent figures who took the stage throughout the day included Ali Azmat, Mekaal Hassan, Tahira Syed, Humayun Bashir Tarar, Deepak Perwani, Nasim Zehra, Dr Moeed Yusuf, Badar Khushnood, Naveed Shahzad, Fahad Mirza, Tamkenat Mansoor, Hajra Yamin, Amber Rahim Shamsi, and many others.
Their discussions explored a wide spectrum of themes, from the humanitarian crisis in Palestine and child safety in the digital age, to entrepreneurship, confidence, arts, culture, and Pakistan’s evolving identity. Together, these conversations reinforced SOT’s role in deepening public understanding and sparking critical nationwide dialogue.
Beyond the panels, attendees enjoyed immersive art installations, creative workshops, and the lively Social Junction, a hub of food, music, and interactive experiences. The day ended on a celebratory note with a joint concert by Farhan Saeed and Abdul Hannan.
However, the summit’s most urgent message came during the final session, “AI in the Newsroom: Friend or Foe,” powered by TechJuice as the event’s digital partner.
Talal Chaudhry, CEO of TechJuice, delivered a decisive warning: without clear national AI policies, Pakistan’s media ecosystem risks being overwhelmed by hallucinations, deepfakes, and algorithmically biased content. Speaking to a full house alongside Farwa Waheed, Sarfraz Ali, and Asad Shabbir, Chaudhry stressed that Pakistan needs an AI governance framework.
“We need a government-level AI policy, MoITT must step in and dictate how media and tech companies incorporate AI as a reporting tool,” he said. “Every news channel and medium needs to be crystal clear on ethical boundaries. AI can speed up work if used rightly, but it cannot replace editorial guidelines.”
“AI can definitely support journalism, but it cannot define ethical boundaries,” he added.
He highlighted bias issues embedded in major global models, noting how geopolitical limits in datasets distort neutrality. He warned that malicious actors are already exploiting these vulnerabilities to spread synthetic misinformation at scale, citing cases in countries like India where birth documents and identity cards have been forged to fabricate identities.
“User privacy is at stake now, more than ever… Growing up, us Millennials did not know how to navigate the internet bubble, but we now have a perfect opportunity. An opportunity to teach the coming generations, Gen Z and Alpha, to use AI the right way,” Chaudhry expressed.
On the question of how to mitigate AI misinformation, he highlighted the dangers of AI hallucinations and inherent biases in training data:
“DeepSeek refuses to discuss Chinese territorial issues because of built-in guidelines, while ChatGPT happily opines on which lands belong to China, but clams up on Gaza. That’s not neutrality; that’s baked-in bias from the datasets.”
Chaudhry stressed that miscreants are already exploiting these gaps to generate fake news, deepfakes, and entirely fabricated references that no fact-checker can trace.
Other panelists also echoed the urgency. Farwa Waheed, a seasoned digital journalist, highlighted practical risks:
“AI speeds up Urdu content generation in under-resourced newsrooms, but without policies, it amplifies disinformation during elections – like the 2025 deepfakes we saw. We need labeling: every AI-assisted story marked clearly, or trust erodes.”
Talking about how AI deepfakes work, Waheed said:
“AI-generated videos are especially difficult for desi people to identify because of a general lack of awareness about anything artificially made… If we tell them a video is AI, they disregard it completely.”
Waheed said she believed this was a matter of concern, as a large share of people have trouble identifying AI-generated content, which bears directly on whether AI is a friend or a foe. She also shared a case from her reporting on Gaza, where AI tools like Grok helped with real-time fact-checking but required human oversight to avoid skewed outputs from biased training data.
She also light-heartedly asked who in Pakistan would be held responsible if AI-written content gave rise to a defamation case.
Sarfraz Ali, Head of Digital Media Daily Pakistan and Daily Pakistan Global, referenced India’s 2025 AI fake ID scandals (e.g., Aadhaar deepfakes per Times of India, June 2025) as a warning for Pakistan, emphasizing AI’s data-reflective nature without inherent agendas. He cited ChatGPT’s Gaza “safety filters” creating blind spots (echoing OpenAI’s 2025 policy updates), and called for MoITT-mandated guidelines with bias audits to avert “post-truth” scenarios.
“Ultimately, content is king irrespective of the size of the media company,” Ali said on the topic of how AI is going to change media journalism in the near future. “Humanized content still ranks. So, to bypass AI, it should be humanized. It is unethical because people pay for news, and they do not want automated, fake news… People need to control AI, which is why I believe AI is not a threat. Newsroom journalists should be trained to use AI as an ally in their work.”
Asad Shabbir, Deputy Director General at the Ministry of Information and Broadcasting, represented the government perspective and agreed that AI can be a friend or a foe depending on the use case. He said that AI can cannibalize media and journalism only when we start thinking like AI, and not vice versa.
Talking about how AI can fix terminology used by media and journalists, Shabbir said:
“A lot of our language can be fixed (using AI)… There are two terms: human trafficking and illegal migration… Media often conflate these two terms, but AI can help with that.”
However, not every moment struck that serious note: the panelists also light-heartedly joked that, if they wanted, they could generate a fake driver’s licence using AI in Pakistan, which is all the more reason to put regulated AI policies in place.