This year’s Dell Technologies Forum once again drew crowds to Warsaw – over 2,700 guests, a figure that shows the scale of this recurring event. The keynote slogan “AI. Simply.” sounded like a provocation, but it soon became clear that the organisers did not mean to trivialise AI, but to signal a change. The forum loudly proclaimed that the time for experimentation and Shadow AI is over. Now is the time for implementations that deliver real value in business and demonstrate a measurable return on investment.
Dell’s prescription for this new phase is the AI Factory, a concept for an implementation plan for companies. Crucially, this plan rests on the thesis that the success of AI today is determined not only by the algorithm itself, but equally by having infrastructure capable of processing data close to where it is created (Edge Computing).
I decided to step away from the official agenda to see what actually lay behind the slogan “AI. Simply.”. The key question became why the transition from uncontrolled Shadow AI to quantifiable, secure projects is so difficult. The issue was explored at the Forum from a variety of perspectives: from strategic dilemmas in boardrooms, to the realities of cyber security on the front line, to frank conversations about the technical pain points of AI.
Global vision – “AI follows the data”
The energy on the main stage could have powered several smaller conferences. Dariusz Piotrowski, VP and Managing Director of Dell Technologies Poland, opening this year’s forum, did not hide his pride. “We have been preparing for this day all year. This is our holiday,” he said, welcoming the record-breaking audience of almost 3,000.
 
After this energetic introduction, the lights dimmed and Michael Dell appeared on the screen. His virtual speech offered a bird’s-eye view: global, calm and drawn in broad strokes. The message was clear: the revolution has already happened. “Soon, more than 75 per cent of corporate data will be processed at the network edge,” he pointed out, adding that this is no longer a vision, but a fact: “Today, more than 3,000 customers are using Dell AI Factories.”
The discussion took a practical turn when Dariusz Piotrowski returned to the stage to relate the global vision to the local context. He decided to take a quick poll of the audience: “How many people in the room use AI solutions on a daily basis (…) and make decisions based on information from them?”. The significant number of hands raised served as a starting point for a key diagnosis. Although Piotrowski commented on the turnout in jest, he immediately moved on to the substance. He pointed out that in most cases this widespread activity is not official company projects, but a phenomenon that can be called Shadow AI.
And this is where we got to the point. “We used to say that technology changes the world. Today we say AI is changing the world,” he began. “Whereas the truth is that AI only changes what it has access to. And it has access to data.”
Moments later, the key phrase that has become the mantra of the entire forum was uttered: “AI follows the data, not the other way around”.
Piotrowski did not sweep the problems under the carpet. He bluntly cited data (McKinsey, IDC) showing that only 10 per cent of AI projects actually generate profits, and that many will fail. The diagnosis was brutal but honest: the problem is not a lack of talent or algorithms. The problem is chaos. “Success has come to those companies that have been able to organise their data and their processes,” he said.
“How?” – a pragmatic action plan
While the vision sets the strategic horizon, real success is determined by pragmatism and effective implementation. This key executive perspective was presented by Said Akar, Senior Vice President at Dell Technologies. His presentation answered the fundamental question of “how?” – how to turn ambitious ideas into a working, profitable process?
However, before presenting the ‘AI Factory’ concept, Said Akar began with the most convincing proof of its effectiveness: as his key case study of a successful AI transformation, he used … Dell itself.
The highlight of this analysis was the hard financial data. “Last year we added $10 billion in revenue. And we did that while reducing costs by 4%,” he announced. To drive the point of these figures home, Akar added a key clarification. He stressed that this was a historic moment for the company – it was the first time it had been able to “decouple revenue growth from cost growth”. For decades, revenue growth inevitably meant operating cost growth too. Implementing AI, he argued, allowed the company to handle this additional volume of business while reducing costs. For a room full of executives, this was the strongest evidence yet that AI transformation is not a cost, but a quantifiable investment.
The engine behind this success is the Dell AI Factory. Akar described it as a ‘blueprint’ for organisations that don’t know where to start.
Interestingly, this plan does not start with the question “which server should we buy?”. It starts with a use case. Akar revealed that when Dell started its journey, it had 1,800 grassroots ideas for AI, which could potentially have led to ‘total chaos’. So the company switched to a top-down approach, setting an ironclad rule: “We will not invest our time and effort if we cannot measure the return on investment (ROI).”
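The triage rule Akar described can be sketched as a simple filter over a project backlog. This is an illustrative sketch only, not Dell’s actual process: all idea names, figures and thresholds below are invented, and the rule is simply “no measurable ROI estimate, no investment”.

```python
# Hypothetical sketch of an ROI-gated project triage. Names, figures
# and the min_roi threshold are invented for illustration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Idea:
    name: str
    est_annual_benefit: Optional[float]  # None = benefit cannot be quantified
    est_cost: float

def triage(ideas: List[Idea], min_roi: float = 1.0) -> List[Idea]:
    """Keep only ideas whose ROI is measurable and above the bar."""
    funded = []
    for idea in ideas:
        if idea.est_annual_benefit is None:
            continue  # "we will not invest if we cannot measure the ROI"
        roi = (idea.est_annual_benefit - idea.est_cost) / idea.est_cost
        if roi >= min_roi:
            funded.append(idea)
    # Highest-ROI projects first
    return sorted(funded,
                  key=lambda i: (i.est_annual_benefit - i.est_cost) / i.est_cost,
                  reverse=True)

backlog = [
    Idea("sales chat assistant", est_annual_benefit=2_000_000, est_cost=500_000),
    Idea("internal meme generator", est_annual_benefit=None, est_cost=50_000),
    Idea("invoice OCR pipeline", est_annual_benefit=300_000, est_cost=200_000),
]
print([i.name for i in triage(backlog)])  # only the idea clearing the ROI bar survives
```

The point of the sketch is the hard gate at the top of the loop: an unquantifiable benefit is rejected before any cost–benefit arithmetic is even attempted, which is how 1,800 ideas shrink to a manageable, defensible portfolio.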
He cited the internal ‘Dell Sales Chat’ tool as an example. The team implemented it in just four months. Sounds great? This is where the catch lies – the same one Piotrowski had already pointed to. The biggest challenge was not the algorithm. “It took us, the data company, four months to put it all together,” Akar admitted, stressing that sorting out the data was the hardest part of the project.
The AI Factory is also about dispelling a few myths. Firstly, it’s a team game – Dell heavily emphasises its ecosystem of partners (Nvidia, Intel, Microsoft, Meta). Secondly, the public cloud is not the only answer. Akar cited data showing that 62% of respondents find it more cost-effective to run AI locally (on-premises). AI is ‘performance hungry’ and needs to be close to the data, which dovetails perfectly with the Edge Computing vision from the first presentation.
Finally, an important point was made for those worried about cost. AI infrastructure does not have to mean building a supercomputer from scratch. Akar stressed the importance of right-sizing: “You can start small and grow gradually.” A revolution, in other words, can be started with small steps.
AI in Polish – debate on trust and competition
The session that most firmly grounded the discussion in Polish realities was the panel ‘AI. Simply. Practice and real-world transformation in Poland’. The room was packed, which was hardly surprising given the line-up. Alongside moderator Robert Domagała (Dell) and Dariusz Piotrowski (Dell), the stage hosted Radosław Maćkiewicz (CEO of the Central Informatics Centre), Zbigniew Jagiełło (innovator and creator of Blik) and Sebastian Kondracki (Chief Innovation Officer of Deviniti and godfather of the SpeakLeash project).
 
Radosław Maćkiewicz talked about the gigantic capital at the disposal of the state – the trust of 11 million users of the mCitizen application. On this foundation, the COI is now building a virtual assistant to help citizens. And here a key declaration was made: the heart of the system will be PLLuM, a Polish language model trained specifically for public administration. As Maćkiewicz vividly put it, it will not be a model that ‘will give you the recipe for apple pie’, but one that will precisely guide you through the intricacies of official matters.
The initially quiet discussion was enlivened by Zbigniew Jagiełło, who asked the panellists a direct question about the point of having two separate, large Polish language models: “Why don’t you join forces? Is Poland so strong (…) that both PLLuM and Bielik can develop separately?”.
The question sparked a discussion that clearly outlined the differing approaches to AI development in Poland. Sebastian Kondracki, representing the open-source Bielik, defended the value of competition, citing a concept coined by Marek Magrysio of Cyfronet, an institution that supports both models: ‘coopetition’, a combination of cooperation and competition. “We cooperate, we exchange information, but a small element of rivalry won’t hurt.” He stressed at the same time that Bielik’s goal is not to race against OpenAI, but “for Poland to be a specialist in specialised models or niche models”.
Radosław Maćkiewicz struck a similar note, pointing out that the state-owned PLLuM is being trained for precisely such specialised tasks – supporting citizens in official matters rather than serving as a general-purpose chatbot.
Dariusz Piotrowski also spoke in the discussion, suggesting that perhaps Poland does not need a single ‘ChatGPT’, but precisely many specialised models, dedicated to specific industries, such as energy or medicine. The debate thus clearly showed that different visions are clashing on the Polish AI scene – from strategic state projects, to competitive open-source models, to the idea of distributed, industry-specific specialisation.
AI on the front line and in the boardroom
Security was another key topic raised at the forum, following an intense discussion of Polish AI models. This fundamental issue was discussed in two contexts: cyber security in the military dimension and risk management at the strategic level in companies.
Of particular interest was the fireside chat featuring Major General Karol Molenda, Commander of the Cyber Defence Forces Component. Dariusz Piotrowski, who moderated the chat, noted that the general combines the military and business perspectives with great ease, which was confirmed in the presentation of the army’s innovative approach to cyber security. General Molenda detailed the ‘Cyber Legion’ project, an initiative to integrate civilian specialists with military experts. The success of this project is evidenced by the fact that it has so far attracted more than 2,500 applicants.
 
General Molenda pointed to a fundamental paradigm shift in army operations as a key element of modern cyber defence. He explained that the unit has moved away from the restrictive ‘need to know’ principle to an open ‘need to share’ strategy. He stressed that in the cyber security domain, the ability to share information quickly is critical to the effectiveness of operations.
In this context, he discussed the role of artificial intelligence, which has evolved from passive monitoring into an active support tool. The general described AI as a disruptive technology (a ‘game changer’) that enables the analysis of reconnaissance data on an unprecedented scale and supports decision-making processes. As an example, he cited the systems’ ability to calculate the probability of success of an operation in real time, based on specific orders and changing conditions.
Equally strategic was the discussion during the panel “AI in management: revolution or controlled experiment?”, held as part of the Executive Business Lounge. Leaders from business (Orange, mBank) and science (Cambridge) grappled with key challenges: How to manage innovation and risk when employees are already using unregulated tools en masse (so-called ‘Shadow AI’)?
Dr Paul Calleja, director of the Cambridge Open Zettascale Lab, gave the perspective of the science and R&D world. He diagnosed that corporations are ‘playing catch-up’ in trying to regulate technology that is evolving too fast. He pointed to the phenomenon of “two IT lives” – the corporate one, locked down, and the private one, where employees freely use ChatGPT. In his view, blocking tools is ineffective. Education becomes the key solution: ‘We need to teach people to think critically and analytically’, because, as he aptly pointed out, AI models are often trained to ‘sound convincing’ and not necessarily to tell the truth.
A practical response to this demand was presented by Bożena Leśniewska (Orange Polska). She described how her company, instead of locking down tools, opted for ‘controlled democratisation’ – a structured process was created for bottom-up initiatives. The result was “250 different use cases that were translated into real business cases”. She emphasised that the foundation of this success was precisely the massive education, with more than 5,000 employees trained in Responsible AI (among other things).
The perspective of the banking sector, focused on risk management, came from Agnieszka Słomka-Gołębiowska (mBank). She noted that banks are moving from analytical applications (e.g. complaints handling) to predictive ones (proactive fraud detection). However, she pointed out that algorithmic bias remains a key challenge. Here she issued a fundamental warning for the entire industry: “The biggest risk would be bias. (…) Banks are in the business of trust. This is what we provide to our customers.” As she stressed, trust is more valuable than any optimisation, and AI must not become a tool for manipulation.
The whole strategic and highly valuable discussion was aptly summed up by Dell’s Mohammed Amin, who gave managers a simple piece of advice: “Think big, start small”.
From ‘Edge’ to the fight against hallucinations
A technical complement to the strategic discussions was a presentation by Wojtek Janusz, Data Science and AI Lead at Dell Technologies. Acting as the company’s ‘translator’ between business and engineering, Janusz outlined the key technology challenges currently facing the industry.
He confirmed the growing importance of the ‘AI on the Edge’ trend that Piotrowski and Akar spoke about earlier. This refers to the ability to run powerful AI models locally, on laptops, which Janusz admitted “was completely unworkable just two years ago” but is now becoming an everyday occurrence.
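The economics behind “AI follows the data” can be illustrated with a toy placement rule: if moving the data to a central model would already blow the latency budget, the model has to come to the data instead. This is purely an illustrative sketch, not a Dell tool or API; the function name, thresholds and uplink figure are invented.

```python
# Illustrative sketch of an edge-vs-cloud placement decision in the
# spirit of "AI follows the data". All numbers are invented.

def choose_placement(payload_mb: float, max_latency_ms: float,
                     uplink_mbps: float = 10.0) -> str:
    """Decide whether a request should be served at the edge or in the cloud."""
    # Rough time just to upload the payload to a remote data centre
    upload_ms = payload_mb * 8 / uplink_mbps * 1000
    if upload_ms > max_latency_ms:
        return "edge"   # data is too heavy to move in time: bring AI to the data
    return "cloud"      # small payload, relaxed deadline: a central model is fine

# A 500 MB sensor batch with a 200 ms budget must stay at the edge;
# a 0.1 MB prompt with a 2-second budget can travel to the cloud.
print(choose_placement(500, 200))   # edge
print(choose_placement(0.1, 2000))  # cloud
```

The sketch captures why Michael Dell’s “75 per cent of corporate data processed at the edge” claim follows from physics as much as from strategy: for large, latency-sensitive data, transport alone exceeds the deadline before any inference has run.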
 
Janusz then directly addressed the biggest pain point of current models, calling hallucinations ‘the current No. 1 problem of the entire industry’. He explained that all the giants of the large language model market are today focusing their efforts precisely on combating algorithmic confabulation. The key is supposed to be a fundamental change in training philosophy: instead of rewarding a model for an answer that is supposed to ‘satisfy a human’ (even if it is made up), the new paradigm involves punishing improvisation. “We are now changing that paradigm and saying: if you don’t know, say you don’t know,” explained Janusz.
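The “punish improvisation” idea can be made concrete with a toy scoring rule: if a confident wrong answer costs more than a correct answer earns, abstaining becomes the rational choice on hard questions. This is a deliberately simplified illustration of the incentive Janusz described, not any vendor’s actual training objective; the rewards and examples are invented.

```python
# Toy illustration of the "if you don't know, say you don't know"
# paradigm: a scoring rule under which abstaining beats guessing wrong.
# Reward values and examples are invented for illustration.

def score(answer: str, truth: str) -> float:
    if answer == "I don't know":
        return 0.0                            # abstention: neutral, never punished
    return 1.0 if answer == truth else -2.0   # a confident error costs more than a hit earns

answers = ["Warsaw", "I don't know", "Krakow"]
truths  = ["Warsaw", "Gdansk", "Gdansk"]
total = sum(score(a, t) for a, t in zip(answers, truths))
print(total)  # 1.0 + 0.0 - 2.0 = -1.0: the one wrong guess wipes out the gains
```

Because the penalty for a wrong answer (-2.0) outweighs the reward for a right one (+1.0), a model optimised against this rule learns that on uncertain questions the expected value of guessing is negative, while honestly abstaining is free.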
This frank take on the matter summed up the entire Dell Technologies Forum perfectly. The days of ‘magical AI’ replacing humans seem to be a thing of the past (‘It hasn’t happened and won’t happen,’ Janusz quipped). It is being replaced by the concept of an ‘AI Factory’, where technology is supposed to take over the ‘thankless tasks’ of things we don’t like to do, but it is the human, within the ‘human in the loop’, who still makes the final decision. The forum showed how to build such a factory. But it is up to us to decide who pulls the levers in it.
I left this year’s Dell Technologies Forum with one dominant feeling: this is the end of ‘magic’ and the beginning of an era of mature engineering. After years dominated by hype and fears of ‘Shadow AI’, the discussion had finally shifted to the right track. The real protagonists of the conference were not algorithms, but hard data (like Dell’s 10 billion in revenue), pragmatism (fighting hallucinations) and accountability (from trust in mBank to General Molenda’s cyberfront).
As editors of Brandsit, we were media patrons of the event.