Generative artificial intelligence (GenAI) is profoundly transforming the financial landscape. Far from being a simple technological evolution, this new frontier marks a structural change that affects operational processes, organizational arrangements, and governance models.
Today, however, the picture of AI projects in the industry reflects a sharp divide between established technologies and emerging fields. Currently, 68 percent of AI projects focus on Machine Learning, while 32 percent involve Generative AI. The gap is also reflected in project maturity: Machine Learning is mostly at the full-adoption or maturity stage (77 percent), whereas Generative AI is mainly at the study (47 percent) or experimentation (45 percent) stage, with only 7 percent of projects already in production or launch (Source: Cetif Research Insight Advanced Analytics & AI Hub 2024).
These data highlight how Machine Learning is now integrated into many business processes, representing a solid and established technological foundation. In contrast, Generative AI remains a frontier still being defined and developed, with potential yet to be explored.
However, the development of Generative AI solutions is proceeding significantly faster than previous technology waves. This speed is accompanied by growing complexity in terms of governance and control, requiring financial institutions to rethink their strategies in a systemic and deliberate manner: from data governance to the development of new operational models and the revision of control systems. The evolutionary path under way will progressively lead to the adoption of increasingly sophisticated architectures based on multi-agent models (agentic AI).
Unlike generalist foundation models, which are designed to be versatile and applicable to a multiplicity of tasks, multi-agent architectures are based on cooperation among specialized generative agents, each designed to perform a specific function in an efficient, precise and traceable manner.
Compared with general-purpose models, multi-agent systems offer several advantages. First, they improve the accuracy of responses through specialization: each agent can be calibrated to a narrow information domain, using data and logic optimized for the relevant context, thus reducing the risk of errors or arbitrary interpretations. Second, multi-agent models provide greater fairness and transparency.
The distribution of tasks among separate agents makes it easier to identify and monitor individual steps leading to a decision, enabling granular and more accessible forms of explainability even for control functions.
Another distinctive aspect is adaptability: in dynamic contexts such as financial markets, where conditions change rapidly, the ability of agents to independently update their strategies and decision logic ensures greater resilience and business continuity. If one component loses effectiveness, the architecture can redistribute loads or update individual agents without compromising the entire system.
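To make the idea concrete, the sketch below shows one possible way to orchestrate cooperation among specialized agents, with every step recorded for later review. It is a minimal illustration in Python; all class and function names are hypothetical and do not refer to any specific framework or vendor product.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical illustration: a router dispatches each request to a
# specialized agent and records every step for later review.

@dataclass
class AgentResult:
    agent_name: str
    output: str
    trace: str  # rationale kept for explainability and control functions

class SpecializedAgent:
    """An agent calibrated to a narrow information domain."""
    def __init__(self, name: str, handler: Callable[[str], str]):
        self.name = name
        self.handler = handler

    def run(self, request: str) -> AgentResult:
        output = self.handler(request)
        return AgentResult(self.name, output, trace=f"{self.name} handled: {request}")

class Orchestrator:
    """Routes requests to domain-specific agents and logs each step."""
    def __init__(self, agents: Dict[str, SpecializedAgent]):
        self.agents = agents
        self.audit_log = []  # AgentResult entries, kept for traceability

    def handle(self, domain: str, request: str) -> str:
        result = self.agents[domain].run(request)  # narrow, specialized scope
        self.audit_log.append(result)              # granular audit trail
        return result.output

# Example: two narrowly scoped agents cooperating under one orchestrator.
agents = {
    "credit": SpecializedAgent("credit_analysis", lambda r: f"[credit view] {r}"),
    "aml": SpecializedAgent("aml_screening", lambda r: f"[aml view] {r}"),
}
orchestrator = Orchestrator(agents)
print(orchestrator.handle("credit", "Assess exposure for client X"))
```

Keeping routing and logging outside the individual agents is what makes the individual steps leading to a decision observable, which is the property the paragraphs above attribute to multi-agent designs.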
The development of generative models, however, depends on the availability of high-quality data, which forms the basis for their training, validation and operation. This paradigm is now complicated by the presence of alternative data sources, notably unstructured data. Texts, documents, system logs, images, voice and behavioral content are crucial information assets for the development of new models, but they require rigorous normalization, validation and traceability processes. Added to this is the growing relevance of synthetic data, generated artificially to overcome limitations related to privacy or the scarcity of real data: while it represents a strategic asset, it also imposes new governance and control challenges.
To address this complexity, financial institutions are progressively reviewing their data governance models, extending the scope of data governance policies from structured data to all data types, adopting shared rules, measurable quality indicators, lineage tracking tools and access management mechanisms. Only by expanding governance to these new sources will it be possible for financial institutions to effectively leverage unstructured data and synthetic data in training AI models. In fact, to date, only 10 percent of institutions significantly use unstructured data for this purpose, while none employ synthetic data (source: Cetif Research Insight Advanced Analytics & AI Hub 2024).
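As an illustration of what extending governance to unstructured and synthetic sources might look like in practice, the minimal sketch below attaches the same lineage and quality metadata to any dataset, whatever its type. Field names and the admission rule are assumptions for the example, not an established standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical metadata record: the same governance fields applied to
# structured, unstructured and synthetic data alike.

@dataclass
class DatasetRecord:
    name: str
    data_type: str        # "structured" | "unstructured" | "synthetic"
    source: str           # originating system or generator
    quality_score: float  # measurable quality indicator, 0..1
    lineage: list = field(default_factory=list)  # upstream dataset names
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_fit_for_training(self, threshold: float = 0.8) -> bool:
        """Shared, measurable admission rule before a dataset feeds a model."""
        return bool(self.source) and self.quality_score >= threshold

# Example: a synthetic dataset derived from anonymized transaction logs.
synthetic_txn = DatasetRecord(
    name="synthetic_transactions_v1",
    data_type="synthetic",
    source="tabular-generator",
    quality_score=0.86,
    lineage=["transaction_logs_2024"],
)
print(synthetic_txn.is_fit_for_training())  # True
```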
Against this backdrop, the LLMOps (Large Language Model Operations) framework is emerging as an evolution of the MLOps (Machine Learning Operations) operating model, adapted to the specifics of generative foundation models. LLMOps covers the entire model lifecycle, from design and deployment through continuous monitoring and risk management.
This approach is not limited to the technical management of models, but introduces practices such as prompt engineering, intelligent orchestration between cloud and edge, integration of Retrieval-Augmented Generation (RAG) techniques, and definition of metrics to assess consistency, reliability, and risk of generated content.
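A simplified sketch of two of these practices, Retrieval-Augmented Generation and a consistency metric for generated content, is shown below. The retrieval step, the prompt construction and the scoring rule are deliberately naive placeholders, and llm_generate stands in for any model call; no specific provider API is assumed.

```python
from typing import Callable

# Hypothetical sketch of a RAG step with a simple consistency check:
# retrieve relevant passages, build the augmented prompt, then score the
# answer against the retrieved evidence.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Naive keyword retrieval, a placeholder for a vector store."""
    scored = sorted(
        corpus.items(),
        key=lambda kv: sum(w in kv[1].lower() for w in query.lower().split()),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str, passages: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

def grounding_score(answer: str, passages: list[str]) -> float:
    """Crude consistency metric: share of answer tokens found in the context."""
    context_tokens = set(" ".join(passages).lower().split())
    answer_tokens = answer.lower().split()
    return sum(t in context_tokens for t in answer_tokens) / max(len(answer_tokens), 1)

def rag_answer(query: str, corpus: dict[str, str],
               llm_generate: Callable[[str], str]) -> tuple[str, float]:
    passages = retrieve(query, corpus)
    answer = llm_generate(build_prompt(query, passages))
    return answer, grounding_score(answer, passages)

# Example with a dummy generator standing in for the model call.
corpus = {"kyc": "KYC checks must be renewed every 24 months for high-risk clients."}
answer, score = rag_answer(
    "When are KYC checks renewed?", corpus,
    llm_generate=lambda prompt: "Every 24 months for high-risk clients.",
)
print(round(score, 2))  # 1.0: every answer token appears in the retrieved context
```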
The shift to LLMOps, therefore, requires a profound cultural change within organizations. GenAI calls for an organizational transformation that involves all functions, from business to internal controls, in a coordinated way, since every internal stakeholder plays a crucial role in the success of the generative solutions being developed. An emblematic example is prompt management, which becomes a key activity, since the quality of outputs depends to a large extent on how prompts are formulated to the agent.
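One way prompt management could be treated as a governed activity rather than an ad hoc practice is sketched below: prompts are versioned, owned and reviewed like any other artifact. The registry structure and field names are hypothetical, used purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch: prompts treated as versioned, owned artifacts
# rather than free text typed ad hoc by individual users.

@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str
    owner: str      # accountable business or control function
    template: str   # reviewed wording with explicit placeholders

    def render(self, **kwargs: str) -> str:
        return self.template.format(**kwargs)

PROMPT_REGISTRY = {
    ("credit_summary", "1.2"): PromptTemplate(
        name="credit_summary",
        version="1.2",
        owner="credit-risk-office",
        template=(
            "Summarize the credit file of client {client_id} in at most "
            "{max_words} words. Cite only information present in the file."
        ),
    ),
}

# Example: the business function renders an approved, versioned prompt.
prompt = PROMPT_REGISTRY[("credit_summary", "1.2")].render(client_id="C-1042", max_words="150")
print(prompt)
```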
Moreover, the probabilistic nature of GenAI, characterized by emergent and sometimes unpredictable behaviors, makes traditional static controls inadequate, introducing unprecedented challenges for control functions, particularly Compliance. The latter will be called upon to play an increasingly strategic role throughout the entire lifecycle of models, in a context where human oversight remains central through the human-in-the-loop approach. To ensure effective oversight, however, it will be necessary to strengthen collaboration between regulatory compliance experts and development teams, fostering cross-functional expertise and shared processes for the monitoring, calibration and ongoing validation of generative models.
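A minimal sketch of a human-in-the-loop control gate of this kind is shown below: outputs are released automatically only below a risk threshold, while everything else is escalated to a human reviewer and the decision is recorded. Threshold, scoring and reviewer roles are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical human-in-the-loop gate: outputs are released automatically
# only below a risk threshold; everything else is routed to a human
# reviewer (e.g. Compliance) and the decision is logged.

@dataclass
class GateDecision:
    released: bool
    reviewed_by: str
    reason: str

def review_gate(output: str, risk_score: float,
                human_review: Callable[[str], bool],
                threshold: float = 0.3) -> GateDecision:
    if risk_score < threshold:
        return GateDecision(True, "auto", f"risk {risk_score:.2f} below threshold")
    approved = human_review(output)  # escalation to a human reviewer
    return GateDecision(approved, "compliance", f"risk {risk_score:.2f} escalated")

# Example: a higher-risk output escalated to a (stubbed) compliance reviewer.
decision = review_gate("Proposed client communication ...", risk_score=0.55,
                       human_review=lambda text: False)
print(decision)  # GateDecision(released=False, reviewed_by='compliance', ...)
```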
Ultimately, the adoption of GenAI in the financial sector cannot be approached as a simple technological challenge: it requires a systemic transformation combining operational evolution, innovation of control models and a renewed organizational culture. Success in deploying multi-agent architectures and LLMOps frameworks will depend on institutions' ability to integrate heterogeneous expertise, redefine data governance principles, and develop adaptive oversight mechanisms capable of responding to the dynamic and probabilistic nature of these systems. In this context, only a collaborative, informed and accountable approach will ensure that GenAI is used responsibly.