
26 September 2025
 

Insights and Algorithms for Ill-Posed Problems

Prof. Lothar Reichel and Prof. Laura Dykes
Kent State University, USA

Abstract

The aim of this course is to introduce Master’s and Ph.D. students to linear ill-posed problems. Their properties and applications will be discussed. The focus of the lectures will be on solution methods for linear discrete ill-posed problems and on the numerical linear algebra required for the solution of these problems.

While there are no prerequisites, a basic knowledge of numerical linear algebra (least squares; LU, QR, and SVD factorizations) and of Matlab programming will be helpful for following the lectures.

Outline

  1. Linear discrete ill-posed problems: Definition, properties, applications
  2. Solution methods for small to moderately sized problems: Regularization, Tikhonov regularization, the singular value decomposition, the generalized singular value decomposition, choice of regularization matrix.
  3. Solution methods for large problems: Iterative methods based on the Lanczos process, the Arnoldi process, and Golub-Kahan bidiagonalization. Regularization by Tikhonov’s method and truncated iteration. Iterative methods for general regularization matrices.
  4. ℓp-ℓq minimization for image restoration.
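As a taste of topic 2, here is a minimal Python sketch (an illustration, not course material) of Tikhonov regularization computed via the SVD on a toy discrete ill-posed problem; the Hilbert matrix, the regularization parameter, and the noise level are all illustrative choices.

```python
import numpy as np

# Toy discrete ill-posed problem: the Hilbert matrix has rapidly decaying
# singular values, so naive least squares amplifies noise enormously.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

# Tikhonov regularization via the SVD: x_mu = V diag(s/(s^2+mu)) U^T b
U, s, Vt = np.linalg.svd(A)
mu = 1e-8
x_tik = Vt.T @ ((s / (s**2 + mu)) * (U.T @ b))

# Compare with the unregularized (naive) solution
x_naive = Vt.T @ ((U.T @ b) / s)
print(np.linalg.norm(x_tik - x_true), np.linalg.norm(x_naive - x_true))
```

The filter factors s²/(s² + μ) damp the contributions of the small singular values, which is exactly where the noise dominates.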

Schedule

  • Monday September 29, 15-18, room B
  • Thursday October 2, 15-18, room 2
  • Friday October 3, 15-18, room 2
  • Monday October 6, 15-18, room B

The first four lectures will be broadcast on Microsoft Teams for students who cannot attend in person.

The final two lectures, each lasting two hours, will be delivered on Teams after the instructors return to their offices. The schedule will be released during the lectures.

Anyone interested in participating in the course should contact the organizers, Alessandro Buccini and Giuseppe Rodriguez.

Exam

TBA

Acknowledgements

The course is partially supported by the INdAM Visiting Professors Program.

23 May 2025
 

Qualitative Properties of Solutions of Uniformly Parabolic Equations

Prof. Daniele Castorina
Università di Napoli Federico II

Dott. Simone Ciani
Università di Bologna

Abstract

The course is divided into two sections:

Section 1 – Second Order Parabolic Equations (Ciani)
In the first lectures we will follow Ch. VII of [1], setting up a definition of solution for second order uniformly parabolic equations. We will then prove existence and uniqueness for the Cauchy-Dirichlet problem via the method of Galerkin approximations and a priori estimates. Next we will turn to regularity theory: we will prove that the unique solution to the proposed boundary value problem improves its regularity as much as the initial datum allows, reaching C-infinity smoothness by a bootstrap argument. In the last lecture, time permitting, we will comment on the loss of regularity when the coefficients and the data are rough, and give a glimpse of the minimal regularity properties still attainable, following Chapters XI-XII of [2].

Section 2 – The Alexandrov-Bakelman-Pucci method and its applications (Castorina)
The classical Alexandrov-Bakelman-Pucci (ABP) estimate is a uniform bound for strong solutions of second order uniformly parabolic equations with bounded measurable coefficients written in nondivergence form. Its main feature is that it is a basic tool in the regularity theory for fully nonlinear parabolic equations. The ABP method, however, is fairly general and can be adapted to a wide variety of problems, such as obtaining a maximum principle in domains of small measure, or simplifying the proofs of several isoperimetric and Sobolev inequalities. The aim of this second part of the course is to introduce the ABP method in detail, to discuss some of its generalizations and refinements, and to give a detailed and complete overview of its applications, explicitly highlighting the improvements this technique offers over more classical tools.
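For orientation, here is the elliptic model case of the ABP estimate, stated in hedged form with constants not tracked (see [5] and [8] for precise versions):

```latex
% If $u \in C(\overline{\Omega}) \cap W^{2,n}_{\mathrm{loc}}(\Omega)$ satisfies
% $a^{ij}(x)\,\partial_{ij} u \ge f$ in a bounded domain
% $\Omega \subset \mathbb{R}^n$, with $\lambda |\xi|^2 \le a^{ij}(x)\xi_i\xi_j
% \le \Lambda |\xi|^2$ and bounded measurable $a^{ij}$, then
\sup_{\Omega} u \;\le\; \sup_{\partial\Omega} u^{+}
\;+\; C(n,\lambda,\Lambda)\,\operatorname{diam}(\Omega)\,
\lVert f^{-} \rVert_{L^{n}(\Omega)} .
```

Note that when f is nonnegative the last term vanishes and the estimate reduces to the classical maximum principle for subsolutions.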

Outline

  1. Existence and Uniqueness
  2. Regularity Theory I – Improvement of Regularity
  3. Minimal Regularity II – Hölder Continuity, Harnack inequality and Applications
  4. The Alexandrov-Bakelman-Pucci estimate
  5. The Maximum Principle in small domains
  6. The Gidas-Ni-Nirenberg Theorem and Isoperimetric and Sobolev inequalities

Schedule

  • 8/7/2025 11:00 – 13:00 and 15:00 – 17:00 room A
  • 9/7/2025 11:00 – 13:00 and 15:00 – 17:00 room A
  • 10/7/2025 11:00 – 13:00 and 15:00 – 17:00 room A

Exam

The final assessment consists of one of two options: a list of exercises to solve during the course together with a seminar; or a written essay on selected topics of the course.

References

  1. L. Evans, Partial Differential Equations, Second Edition, AMS, 1998.
  2. E. DiBenedetto, U. Gianazza, Partial Differential Equations, Third Edition, Birkhäuser, 2023.
  3. Berestycki, H., Nirenberg, L. On the method of moving planes and the sliding method, Bol. Soc. Brasil. Mat. (N.S.) 22, 1991, 1–37.
  4. Berestycki, H., Nirenberg, L., Varadhan, S. R. S. The principal eigenvalue and maximum principle for second-order elliptic operators in general domains, Comm. Pure Appl. Math. 47, 1994, 47–92.
  5. Cabré, X. On the Alexandrov-Bakelman-Pucci estimate and the reversed Hölder inequality for solutions of elliptic and parabolic equations, Comm. Pure Appl. Math. 48, 1995, 539–570.
  6. Cabré, X., Ros-Oton, X. Sobolev and isoperimetric inequalities with monomial weights, J. Differential Equations 255, 2013, 4312–4336.
  7. Cabré, X., Ros-Oton, X., Serra, J. Sharp isoperimetric inequalities via the ABP method, J. Eur. Math. Soc. 18, 2016, 2971–2998.
  8. Gilbarg, D., Trudinger, N. S. Elliptic Partial Differential Equations of Second Order. 2nd ed., Springer-Verlag, Berlin-New York, 1983.
23 March 2025
 

Introduction to Compositional Data Analysis and Modelling

Prof. Fabio Divino
Università del Molise & University of Jyväskylä

Abstract

This course introduces the fundamental concepts of compositional data analysis, including the algebraic structure of the simplex and its main properties. It will then cover the basic tools for analysis before presenting the main regression models:
(a) compositional data as a predictor;
(b) compositional data as a response variable;
(c) compositional data as both predictor and response.

All topics will be explored through hands-on lab sessions in R using real data and simulations.
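As a hands-on preview (a Python sketch rather than the course's R labs, with illustrative numbers), the centered log-ratio (CLR) transform and its inverse can be written as:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a composition (strictly positive parts)."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))   # geometric mean of the parts
    return np.log(x / g)

def clr_inv(y):
    """Map CLR coordinates back to a composition summing to 1 (the closure)."""
    z = np.exp(y)
    return z / z.sum()

comp = np.array([0.2, 0.3, 0.5])
y = clr(comp)
print(y, y.sum())          # CLR coordinates sum to zero
print(clr_inv(y))          # recovers the original composition
```

The zero-sum constraint on CLR coordinates is the price of the symmetric treatment of parts; the ALR and ILR transforms trade this off differently.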

Outline

  • Lecture 1: Introduction to Compositional Data Analysis. ALR, ILR, and CLR Transformations
  • Lecture 2: Descriptive Analysis in the Simplex. Introduction to Regression Models
  • Lecture 3: Regression Models with Compositional Data

Schedule

The course consists of a total of 6 hours, scheduled as follows:

  • March 17, 15:00-17:00 – Aula A
  • March 19, 11:00-13:00 – Aula A
  • March 21, 15:00-17:00 – Aula A

Exam

The final exam consists of a presentation on a specific topic covered in the course.

References

  • K. Gerald van den Boogaart & Raimon Tolosana-Delgado, Analyzing Compositional Data with R, Springer, 2013.
12 March 2025
 

Introduction to Algorithmic Fairness: Principles, Methods and Regulatory Perspectives

Dr. Erasmo Purificato
European Commission, Joint Research Centre (JRC), Italy

Abstract

The course provides a comprehensive introduction to algorithmic fairness, exploring key concepts such as definitions, bias characterisation and the potential sources of unfairness in machine learning models. Initially, we will thoroughly examine fairness criteria, bias detection metrics, and the limitations of fairness evaluation in binary scenarios. Then, we will analyse the emerging multiclass and multigroup approaches, and cover bias mitigation techniques and their practical trade-offs. Finally, the course will examine legal and ethical frameworks governing algorithmic fairness, with a focus on EU regulations such as the GDPR, DSA, and AI Act, as well as global policies.

Outline

  • Lecture 1 – Foundations of Algorithmic Fairness
    • Why fairness matters in AI and ML
    • Defining fairness and bias
    • Potential causes of unfairness in ML
    • Fairness criteria
    • Conflicts between fairness goals
  • Lecture 2 – Measuring Bias and Fairness
    • Bias detection metrics
    • Challenges in binary scenarios
    • Extending fairness metrics to multiclass and multigroup scenarios
  • Lecture 3 – Mitigating Bias
    • Bias mitigation strategies
    • Choosing the right fairness intervention
    • Trade-offs and practical implementations
  • Lecture 4 – Legal and Ethical Frameworks for Fairness in AI
    • Overview of EU Regulations affecting AI and ML (e.g., the GDPR, DSA, and AI Act)
    • Fairness principles in EU Regulations
    • Fairness principles in global regulations
    • The future of algorithmic fairness and open research challenges
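To make the bias detection metrics of Lecture 2 concrete, here is a hedged Python sketch (toy data, hypothetical function names) of two common binary-scenario measures: the demographic parity difference and the equal opportunity (true-positive-rate) gap.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive prediction rates between two groups (0/1 arrays)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate1 - rate0)

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates (equal opportunity) between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))

# Toy predictions: group 1 receives positive predictions more often
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_diff(y_pred, group))         # 0.5
print(equal_opportunity_diff(y_true, y_pred, group))  # 0.5
```

Extending such metrics beyond two groups and two classes, the subject of the multiclass and multigroup part of the course, is precisely where the binary formulation shows its limits.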

Schedule

The course will have a total duration of 10 hours, scheduled as follows:

  • May 21, 14:00-18:00 Aula II
  • May 22, 10:30-12:30 Aula F
  • May 22, 14:00-16:00 Aula F
  • May 23, 10:00-12:00 Aula F

Exam

The final exam consists of either a seminar presentation on a specific topic studied during the course or a test held on the last day of the course. The definitive format will be announced when the schedule is finalized. The course will be held in person; please contact me if you are interested in joining.

References

The content of the course is based on (but not limited to) the following articles:

  1. Simon Caton and Christian Haas. Fairness in Machine Learning: A Survey. ACM Comput. Surv. 56, 7, Article 166 (2024). https://dl.acm.org/doi/10.1145/3616865
  2. Corbett-Davies, Sam, Johann D. Gaebler, Hamed Nilforoshan, Ravi Shroff, and Sharad Goel. The measure and mismeasure of fairness. Journal of Machine Learning Research 24, no. 312 (2023). https://jmlr.org/papers/v24/22-1511.html
  3. Dana Pessach and Erez Shmueli. A Review on Fairness in Machine Learning. ACM Comput. Surv. 55, 3, Article 51 (2023). https://doi.org/10.1145/3494672
  4. Sahil Verma and Julia Rubin. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness (FairWare 2018). https://doi.org/10.1145/3194770.3194776
  5. Purificato, Erasmo, Ludovico Boratto, and Ernesto William De Luca. Toward a responsible fairness analysis: from binary to multiclass and multigroup assessment in graph neural network-based user modeling tasks. Minds and Machines 34, no. 3 (2024). https://doi.org/10.1007/s11023-024-09685-x
  6. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016, OJ L119/1. http://data.europa.eu/eli/reg/2016/679/oj
  7. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), 2022, OJ L277/1. http://data.europa.eu/eli/reg/2022/2065/oj
  8. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), 2024. http://data.europa.eu/eli/reg/2024/1689/oj
14 January 2025
 

MAIN PhD Seminars 2024

  • January 15th – Sandro Gabriele Tiddia
  • January 29th – Andrea Azzarelli
  • February 12th – Valentino Artizzu
  • February 26th – Simone Pusceddu
  • March 12th – Nicola Piras
  • March 26th – Sara Vergallo, Giorgia Nieddu
  • April 9th – Lejzer Javier Castro Tapia
  • April 23rd – Matteo Mocci
  • May 7th – Giuseppe Zecchini
  • May 21st – Antonio Pio Contrò
  • June 4th – Matteo Palmieri
  • June 18th – Michele Faedda
  • June 25th – Giuseppe Scarpi

All seminars take place at 13:00 in Aula Magna di Fisica.

 

Sandro Gabriele Tiddia: LLM Agents: Definitions and Real-World Applications

In this seminar, we explore the concept of ‘agents’ in artificial intelligence (AI), with a particular focus on the role of Large Language Models (LLMs) in powering these systems. The seminar begins by discussing a real-world application where LLMs are used to build a question-answering (QA) system, showing how LLMs can function as ‘agents’ within such a system. We then examine various definitions of ‘agent’ across AI subfields and consider how agents interact with their environment, make decisions, and pursue goals autonomously. Additionally, we revisit earlier works on agency in AI, reflecting on their original, more profound ideas, and connect them to recent developments and applications of LLM-powered agents. The seminar concludes by exploring experiments and use cases from recent literature, highlighting the capabilities and potential of LLM agents across different domains. The goal is to provide a clear introduction to the concept of LLM-powered agents, their role in AI systems, and how this concept has evolved from theoretical foundations to practical applications.

Andrea Azzarelli: Fractional Laplacian and ADMM for glyph extraction

In archaeology it is a common task to extract incisions, or glyphs, from a surface. This procedure is usually done manually and is therefore prone to errors and extremely time-consuming. In this talk we present a variational model to automatically extract these incisions from a smooth surface. We provide a procedure to generate realistic synthetic data and show the performance of the proposed method on such data.
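The talk's specific variational model is not given in the abstract; purely as a generic illustration of the ADMM splitting it relies on, the following Python sketch applies ADMM to the standard lasso problem min ½||Ax - b||² + λ||x||₁ (all data synthetic, all parameter values illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1, with the split x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse each step
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update (quadratic)
        z = soft_threshold(x + u, lam / rho)               # z-update (shrinkage)
        u = u + x - z                                      # dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = lasso_admm(A, b, lam=0.05)
print(np.round(x_hat, 2))
```

The appeal of ADMM, in imaging as here, is that each subproblem is cheap: a linear solve and a closed-form proximal step.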

Valentino Artizzu: End-User Development for Extended Reality: Empowering Users to Create and Understand XR Environments

In this seminar we will see how to enable individuals without prior XR development experience to create and understand XR environments. The focus is on End-User Development (EUD) techniques that allow users to design, build, and adapt XR systems. The research specifically explores methodologies and tools for non-programmers to construct XR environments. It examines how EUD can facilitate novice developers in comprehending existing VR environments for enhancement purposes, investigates how EUD can empower domain experts to tailor these environments to meet diverse requirements, and delves into how EUD can guide domain experts in configuring XR environments to support task learning and demonstration. The goal is to provide a clear overview of the topic through the experiences and applications developed during a three-year PhD career.

Nicola Piras: Global and local fit measures for latent class models and extensions

Latent class (LC) analysis is a powerful and flexible statistical tool for model-based clustering with categorical data. An important task in LC analysis is the choice of the number of clusters or classes. This is a model selection problem, and Information Criteria are usually adopted for the purpose: measures that weigh model fit (the log-likelihood) against model complexity (based on the number of free parameters). The LC model formulation relies on an assumption of conditional independence between the variables involved; adherence to this assumption and the correct estimation of the parameters are central to evaluating how well the model fits the data. While global selection and goodness of fit are verified through Information Criteria, the model's assumptions must also be checked: specific statistics can be defined to verify the local independence assumption, and in the LC literature the statistics considered are the bivariate residuals. The standard LC model can be modified to handle more complex data structures, and fit measures must be adapted to the new formulations. In this talk, after briefly discussing the results in the standard formulation, I will also present the extension to the case in which data have a multilevel cross-classified structure. This structure arises when observations are simultaneously nested within two groupings, for example, children nested within both schools and neighborhoods. An application is illustrated using an Italian dataset on students' evaluation of their degree programmes, where degree programmes are nested in both universities and fields of study.
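The Information Criteria mentioned above have simple closed forms; a minimal Python sketch with made-up log-likelihood values for two hypothetical LC models:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: penalizes fit by model complexity."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik, k, n):
    """Bayesian information criterion: a stronger penalty for large samples."""
    return -2.0 * log_lik + k * math.log(n)

# Hypothetical comparison of LC models with 2 vs 3 classes (made-up values)
models = {"2 classes": (-1520.3, 17), "3 classes": (-1498.7, 26)}
n = 500
for name, (ll, k) in models.items():
    print(name, round(aic(ll, k), 1), round(bic(ll, k, n), 1))
```

The model with the smaller criterion value is preferred; note how BIC's log(n) penalty can reverse AIC's ranking in large samples.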

Sara Vergallo: Mathematics as a foundation for learning Machine Learning from primary school onward

The proliferation of artificial intelligence (AI) in many people’s daily lives has led the education sector to recognize the importance of teaching elements of AI—and in particular Machine Learning (ML)—from the earliest stages of schooling (Karalekas et al., 2023). Consequently, there is a need for resources, guidelines, and studies on the feasibility and methods for integrating ML into lower‐school settings, beginning in kindergarten (Sanusi et al., 2023), so that children can understand how the machines they interact with every day work (Lin & Brummelen, 2021). Teaching machine learning in primary and secondary school is very challenging, also due to students’ deficiencies in data analysis—especially in classification (sets) and data representation (trees, two‐way tables) (Grillenberger & Romeike, 2019)—an essential competency included in mathematics curricula from primary school onward and in secondary‐school computer science curricula, yet insufficiently promoted or stimulated (Grillenberger & Romeike, 2019). We investigated the level of these mathematical skills in a fifth‐grade class at a primary school and proposed a didactic pathway composed almost entirely of unplugged activities for learning the basics of ML. The activity fostered an improvement in the children’s classification and representation skills and received a high level of enthusiasm; this latter point is significant, as one of the critical issues identified in the literature is the lack of engaging activities within the school context (Grillenberger & Romeike, 2019). The results from these in‐class research activities will then be considered within a broader research overview concerning the use of non‐standard approaches (such as game‐based learning) to enhance mathematical skills, which are also necessary for a better understanding of computer‐science content.

Giorgia Nieddu: Use of GenAI to Support Learning and Teaching Mathematics

This seminar will present our recent experiments on the use of large multimodal models for learning and teaching mathematics. The first, conducted in North Macedonia, explored how students interact with GenAI in electronics problem-solving activities with mathematical content; the second, carried out in collaboration with the University of Turin, investigated how GenAI can support teachers in preparing educational materials. In this latter study, conducted within teacher training courses, we observed teachers’ behaviors and attitudes, aiming to understand whether AI could be useful in their lesson planning and how.

Lejzer Javier Castro Tapia: Mod-2 Cohomological Classification of Orbit Spaces of Free Involutions on the 2-Fold Projective Product Space

The action of a compact Lie group on a topological space describes the symmetries of that space; in that sense, the properties of the orbit spaces of these actions have attracted many mathematicians around the world since the beginning of the twentieth century. In this talk we present, in an informative way, a cohomological classification via spectral sequences of orbit spaces of free involutions on the two-fold projective product space, a manifold that generalizes the usual projective spaces and which was first introduced by Donald Davis in 2010.

Matteo Mocci: Automatic Walkability Assessment using AI and multi-input image classification

Walkability is a key element of sustainable and livable cities, influencing public health, environmental impact, and social connectivity. Traditional methods for assessing walkability, such as surveys and audits, are often time-consuming and limited in scale. Recent advancements in artificial intelligence and computer vision offer new opportunities to automate and enhance these assessments using street-level and aerial imagery. This seminar explores how deep learning and multi-perspective image analysis can provide more comprehensive and scalable walkability evaluations. By integrating insights from urban planning, AI, and geospatial analysis, we will discuss the potential and challenges of these emerging technologies in shaping more pedestrian-friendly cities.

Giuseppe Zecchini: On the algebraic study of substructural logics by means of Płonka sums

Logic can be intuitively defined as the science of correct reasoning: given a certain set of premises, we want to be able to establish their consequences. Algebraic Logic can naively be defined as the study of Logic through the methods of Algebra. In the first part of this talk, we will explain in detail what a logic formally is and what it means to study it algebraically. In the second part, we will introduce and motivate substructural logics, which are traditionally defined by means of Gentzen-style sequent calculi in which one or more of the structural rules (exchange, weakening, contraction) of Gentzen’s LK calculus for classical logic are restricted or eliminated. Finally, in the third part, we will present the algebraic counterpart of substructural logics, residuated lattices, and make some remarks on the study of their structure through the construction of Płonka sums, a construction introduced in Universal Algebra in the 1960s by the eponymous Polish mathematician.
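For readers meeting residuated lattices for the first time, the defining law is the residuation property (in the commutative case one writes a single implication for both residuals):

```latex
% A residuated lattice is an algebra $(A, \wedge, \vee, \cdot, \backslash, /, 1)$
% where $(A, \wedge, \vee)$ is a lattice, $(A, \cdot, 1)$ is a monoid, and
% for all $x, y, z \in A$:
x \cdot y \le z
\;\iff\; y \le x \backslash z
\;\iff\; x \le z / y .
```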

Antonio Pio Contrò: A Recent Specialization on the Notion of Formality

In topology, we can intuitively define the homotopy type of a space as “its behavior up to continuous deformations”. In the 1970s, Daniel Quillen and Dennis Sullivan developed the so-called rational homotopy theory for topological spaces, an elegant framework that makes it possible to completely understand a specific homotopy type (called the rational homotopy type) of certain topological spaces from a special algebraic structure associated with each of them: the minimal model of the space. In this context, the notion of a formal space was introduced for the first time. Roughly speaking, a formal space is a topological space for which the rational homotopy type can be reconstructed from its cohomology algebra with rational coefficients. The term “formal” comes from the fact that “the rational homotopy type and the minimal model are a formal consequence of the cohomology algebra”. More recently, in the context of complex geometry, new notions of formality have been introduced for complex manifolds by J. Stelzig (LMU) and A. Milivojević (University of Waterloo). In this talk, we will present both notions of formality, beginning with the definition of a complex manifold and introducing the associated algebraic structure on differential forms. We will conclude with a brief, general overview of the existing relationships among the various concepts of formality.

Matteo Palmieri: A First Approach to the Bergman Kernel and Metric

The Bergman kernel and metric have been foundational tools in geometric analysis since their introduction by Stefan Bergman in 1922. The basic idea is to associate with each bounded domain in complex Euclidean space a special Hilbert space, the square-integrable holomorphic functions on the domain, and to study the geometric properties of the domain by analyzing the reproducing kernel of this space and the metric derived from it. From this concept, the Bergman kernel and metric evolved into powerful instruments for function theory, analysis, differential geometry, and partial differential equations. The objective of this seminar is to present the Bergman kernel and metric, focusing on the favorable properties of these objects. To this end, we will review the necessary background and provide concrete examples and possible applications. Some natural questions will arise along the way, leading to a brief overview of the main directions of current research.
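As a concrete instance of the construction described above, the unit disk admits an explicit kernel, from which the Bergman metric is derived:

```latex
% Bergman kernel of the unit disk $\mathbb{D} \subset \mathbb{C}$:
K(z, w) \;=\; \frac{1}{\pi\,(1 - z\bar{w})^{2}}, \qquad z, w \in \mathbb{D},
% and the Bergman metric comes from the on-diagonal kernel:
g_{z\bar z} \;=\; \partial_z \partial_{\bar z} \log K(z, z)
\;=\; \frac{2}{(1 - |z|^{2})^{2}} .
```

Up to a constant factor this recovers the Poincaré metric of the disk, a first hint of the geometric content carried by the kernel.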

Michele Faedda: Boolean Operators: Practical Problems and Real Applications

Boolean operations on 3D meshes, such as union, intersection, and difference, are essential in computer graphics, CAD, and digital fabrication. Despite their wide use, performing these operations robustly remains a challenge due to the limitations of floating-point arithmetic, which can lead to topological errors and unreliable results. This seminar explores the geometric and computational difficulties behind mesh-based Boolean operations, and presents a hybrid solution that combines floating-point and exact arithmetic to improve robustness without sacrificing efficiency. We will discuss how this approach enables more reliable modeling, and highlight potential applications, including version control for 3D assets through mesh differencing, in the spirit of a Git-like system.
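The talk's hybrid approach is not reproduced here; as a minimal, hedged illustration of why exact arithmetic matters for geometric predicates, this Python sketch evaluates the classic 2D orientation test with exact rationals:

```python
from fractions import Fraction

def orient2d(a, b, c):
    """Sign of the signed area of triangle abc: +1 ccw, -1 cw, 0 collinear.
    Exact when the coordinates are Fractions or ints."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (det > 0) - (det < 0)

# Three points on the line y = 3x, written as exact rationals:
pts = [(Fraction(0), Fraction(0)),
       (Fraction(1, 10), Fraction(3, 10)),
       (Fraction(2, 10), Fraction(6, 10))]
print(orient2d(*pts))                     # 0: collinearity detected exactly
# With binary floats, 1/10 and 3/10 are already rounded before the predicate
# runs, so the same test may return a spurious +1 or -1.
print(orient2d((0, 0), (0, 1), (-1, 0)))  # +1: counterclockwise
```

Robust mesh Booleans push this idea much further, typically filtering with fast floating-point evaluation and falling back to exact arithmetic only in ambiguous cases.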

Giuseppe Scarpi: GENIS, a simple and explainable algorithm for sentiment analysis of e-commerce reviews

GENIS is a simple algorithm that analyzes textual e-commerce reviews, calculating a rating that is more reliable than the typical “stars.” Compared to many similar algorithms, GENIS aims to be more explainable, providing elements that help users understand why a certain score was given. We will look at what GENIS does, but also the path taken to develop it—including mistakes and missteps—because sometimes, the best part of the journey is the road itself, not the destination.
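GENIS's internals are not described in this abstract; purely as an illustration of the kind of explainability involved, here is a toy lexicon-based scorer (hypothetical lexicon and function names) that returns both a score and the per-word contributions that explain it:

```python
# Hypothetical mini-lexicon; GENIS's actual method is not reproduced here.
LEXICON = {"great": 2, "good": 1, "fine": 1, "slow": -1, "bad": -2, "broken": -2}

def score_review(text):
    """Score a review and return the per-word contributions explaining it."""
    words = text.lower().split()
    contributions = {w: LEXICON[w] for w in words if w in LEXICON}
    return sum(contributions.values()), contributions

total, why = score_review("Great product but shipping was slow")
print(total, why)   # 1 {'great': 2, 'slow': -1}
```

Returning the contribution map alongside the score is the simplest form of the "why this rating" explanation the abstract alludes to.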

10 January 2025
 

Interpretable and Explainable Machine Learning Models

Dr. Claudio Pomo
Politecnico di Bari

Abstract

The course focuses on methods for interpreting and explaining machine learning (ML) models, including inherently interpretable approaches and post-hoc explanation techniques. Key concepts of interpretability will be introduced, alongside the analysis of interpretable models and the application of explanation methods for complex models. The course critically evaluates existing techniques in terms of fidelity, stability, fairness, and practical utility, while addressing open challenges and future perspectives.
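As a small, model-agnostic example in the spirit of the post-hoc techniques the course covers (a sketch, not one of the course's specific methods), permutation feature importance can be written in a few lines of Python with a toy black-box model:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Post-hoc, model-agnostic importance: the accuracy drop when one
    feature column is shuffled, averaged over repeats."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the feature-label association
            drops[j] += (base - np.mean(predict(Xp) == y)) / n_repeats
    return drops

# Toy black-box model: the label depends only on feature 0
predict = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = (X[:, 0] > 0).astype(int)
print(np.round(permutation_importance(predict, X, y), 2))
```

Only the informative feature shows a large accuracy drop; the irrelevant columns score zero, which is exactly the fidelity question such explanations are evaluated on.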

Schedule

The course will have a total duration of 10 hours, scheduled as follows:

  • March 20, 15:00-17:30 Aula II
  • March 21, 10:00-12:30 Aula F
  • March 24, 15:00-17:30 Aula II
  • March 25, 10:00-12:30 Sala Riunioni II piano

Exam

The final exam consists of a project analyzing a case study using the techniques and tools acquired during the course. The course will be held in person. Please contact me if you are interested in joining.

References

  1. Lundberg, S. M., and Lee, S.-I. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 2017.
  2. Ribeiro, M. T., Singh, S., and Guestrin, C. Why should I trust you? Explaining the predictions of any classifier. Proceedings of the ACM SIGKDD, 2016.
  3. Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2nd edition, 2022.
  4. Doshi-Velez, F., and Kim, B. Towards a rigorous science of interpretable machine learning. arXiv preprint, 2017.
  5. Agarwal, C., Krishna, S., Saxena, E., Pawelczyk, M., Johnson, N., Puri, I., … & Lakkaraju, H. OpenXAI: Towards a transparent evaluation of model explanations. Advances in Neural Information Processing Systems, 2022.
24 October 2024
 

Introduction to algebraic logic

Dr. Nicolò Zamperlin
Università degli Studi di Cagliari

Abstract

The course is an introduction to the theory of algebraizability of Blok and Pigozzi. Through an analytic study of the first chapters of Font's handbook on abstract algebraic logic, we will first introduce the elementary notions of universal algebra needed to link logic and algebra (closure operators and their lattices, varieties, quasivarieties, and equational consequence). Building on these notions, we will consider the case of implicative logics and their algebraic properties, introducing the completeness technique based on the Lindenbaum-Tarski process. Finally, we will generalize these notions to the class of algebraizable logics (with a glimpse of the larger Leibniz hierarchy), with the ultimate goal of proving the isomorphism theorem and the transfer of the deduction theorem.
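The closure operators mentioned above are the basic bridge between logic and algebra; for reference, a map C : P(A) → P(A) on the subsets of a set A is a closure operator when, for all X, Y ⊆ A:

```latex
X \subseteq C(X), \qquad
X \subseteq Y \;\Rightarrow\; C(X) \subseteq C(Y), \qquad
C(C(X)) = C(X).
```

Both the consequence relation of a logic and equational consequence over a class of algebras are closure operators, which is what makes the comparison between the two sides possible.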

Schedule

The course will have a duration of 20 hours, scheduled as follows:

  • November 7, aula B, h. 15-17
  • November 14, aula A, h. 15-17
  • November 22, aula II, h. 9:30-11:30
  • November 29, aula II, h. 9:30-11:30
  • December 2, aula II, h. 9:30-11
  • December 6, aula II, h. 9:30-11:30
  • December 12, aula B, h. 15-17
  • January 15, aula B, h. 10-12
  • January 20, aula B, h. 15-17
  • February 3, aula B, h. 10-12

Exam

The final exam consists of a seminar presentation. The course will be held in person; please contact me if you are interested in joining.

References

  1. Bergman, C., Universal Algebra: Fundamentals and Selected Topics, Chapman & Hall Pure and Applied Mathematics, Chapman and Hall/CRC, 2011.
  2. Blok, W., and Pigozzi, D., Algebraizable logics, vol. 396 of Memoirs of the American Mathematical Society, A.M.S., 1989.
  3. Burris, S., and Sankappanavar, H.P., A course in Universal Algebra, freely available online: https://www.math.uwaterloo.ca/snburris/htdocs/ualg.html, 2012 update.
  4. Czelakowski, J., Protoalgebraic logics, vol. 10 of Trends in Logic: Studia Logica Library, Kluwer Academic Publishers, Dordrecht, 2001.
  5. Font, J.M., Abstract Algebraic Logic: An Introductory Textbook, College Publications, 2016
23 September 2024
 

Conic Programming: Theory and applications

Prof. Benedetto Manca
Università degli Studi di Cagliari

Abstract

The course covers the theory of conic programming, starting from the simplest case of linear programming and introducing conic quadratic and semi-definite programming. The first part of the course will introduce the theoretical background needed to define the concept of conic programming. In the second part, conic quadratic and semi-definite programming will be addressed together with some applications.
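In a standard form, a conic program over a closed convex cone K reads:

```latex
\min_{x \in \mathbb{R}^n} \; c^{\top} x
\quad \text{s.t.} \quad A x = b, \;\; x \in K,
% with, for example,
% K = \mathbb{R}^n_{+} (linear programming),
% K = \{ x : x_n \ge \|(x_1,\dots,x_{n-1})\|_2 \} (conic quadratic programming),
% K = the cone of positive semidefinite matrices (semidefinite programming).
```

The three cone choices in the comments correspond exactly to the three problem classes in the course title.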

Outline

  • From Linear to Conic Programming
  • Conic Quadratic Programming
  • The quadratic formulation of the Distance Geometry Problem
  • Semi-definite Programming
  • The semi-definite relaxation of the Distance Geometry Problem
  • Diagonally dominant matrices and positive semi-definite matrices
  • The ellipsoidal separation problem

Schedule

The course consists of 10 hours, with two lectures per week. Details will be announced at the first lecture, which will take place on October 3, 2024 at 2:30 p.m. in room B of the Department of Mathematics and Computer Science.

Exam

The final exam consists of a presentation on a specific application of conic programming (conic quadratic or semi-definite).

References

  1. Ben-Tal, Aharon, and Arkadi Nemirovski. Lectures on modern convex optimization: analysis, algorithms, and engineering applications. Society for industrial and applied mathematics, 2001.
  2. Liberti, Leo. Distance geometry and data science. Top 28.2 (2020): 271-339
  3. Astorino, Annabella, et al. Ellipsoidal classification via semidefinite programming. Operations Research Letters 51.2 (2023): 197-203.
12 September 2024
 

Introduction to Kähler Geometry

Prof. Roberto Mossa, Prof. Giovanni Placini
Università degli Studi di Cagliari

Abstract

This introductory course covers some of the fundamental concepts of Kähler geometry, with particular attention to almost complex and complex manifolds, the properties of Hermitian metrics, and Kähler metrics. Starting from the basics of differential geometry, we will explore the structure of almost complex and complex manifolds. Subsequently, we will delve into the properties of Hermitian metrics, focusing on the definition and characteristics that define Kähler metrics, which play a key role in integrating the complex structure with the Riemannian one. Through concrete examples and applications, students will gain a deep understanding of these concepts, preparing them for advanced studies in Kähler geometry.
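The central compatibility condition the course builds toward can be stated compactly: on a complex manifold (M, J) with a Hermitian metric g, one forms the fundamental 2-form and asks it to be closed:

```latex
\omega(X, Y) \;=\; g(JX, Y), \qquad
g \ \text{is K\"ahler} \iff d\omega = 0 .
```

This single equation is what "integrates the complex structure with the Riemannian one", as described in the abstract.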

Schedule

The course consists of 32 hours divided into 16 lectures. This is the schedule of the lectures:

  • Tuesday, January 14, 15-17, Aula III (Giovanni Placini)
  • Thursday, January 16, 11-13, Aula III (Giovanni Placini)
  • Tuesday, January 21, 11-13, Aula III (Giovanni Placini)
  • Thursday, January 23, 11-13, Aula III (Giovanni Placini)
  • Tuesday, January 28, 11-13, Aula III (Giovanni Placini)
  • Thursday, January 30, 11-13, Aula III (Roberto Mossa)
  • Tuesday, February 4, 11-13, Aula III (Roberto Mossa)
  • Thursday, February 6, 11-13, Aula III (Roberto Mossa)
  • Tuesday, February 11, 11-13, Aula III (Roberto Mossa)
  • Thursday, February 13, 11-13, Aula III (Roberto Mossa)
  • Monday, February 17, 11-13, Aula III (Roberto Mossa)
  • Thursday, March 20, 11-13, Aula F (Roberto Mossa)
  • Thursday, March 27, 11-13, Aula F (Roberto Mossa)
  • Thursday, April 3, 11-13, Aula F (Giovanni Placini)
  • Tuesday, April 8, 11-13, Aula F (Giovanni Placini)
  • Thursday, April 10, 11-13, Aula F (Giovanni Placini)

Exam

The final exam consists of a seminar on a topic building on the content of the course. The topic may be proposed by the students themselves or chosen from a list provided at the end of the lectures.

21 August 2024
 

Numerical Analysis with Deep Neural Networks

Prof. Yuesheng Xu
Old Dominion University and Syracuse University, USA

Abstract

This four-talk lecture sequence aims to introduce numerical analysis with deep neural networks. Traditional function classes used in numerical analysis include polynomials, trigonometric polynomials, splines, finite elements, wavelets, and kernels. Deep neural networks were recently employed in numerical analysis as a class of approximation functions, demonstrating advantages over traditional function classes. These talks will cover the following topics:

  1. Deep neural network representation of a function
  2. Optimization problems that learn a neural network
  3. Adaptive solutions of integral equations with deep neural networks
  4. Adaptive solutions of partial differential equations with deep neural networks
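As a minimal illustration of item 1 (a sketch, not material from the talks), a one-hidden-layer ReLU network can represent the function |x| exactly with two hidden units:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer ReLU network: x -> W2 @ relu(W1 * x + b1) + b2."""
    return W2 @ relu(W1 * x + b1) + b2

# Exact representation of |x|: |x| = relu(x) + relu(-x)
W1, b1 = np.array([1.0, -1.0]), np.zeros(2)
W2, b2 = np.array([1.0, 1.0]), 0.0
for x in (-2.0, -0.5, 0.0, 1.5):
    print(x, mlp(x, W1, b1, W2, b2))
```

Piecewise-linear functions of this kind are the building blocks behind the approximation-theoretic advantages of deep networks over the traditional function classes listed above.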

Schedule

  • September 12th, 9:30-11:30, Aula A
  • September 13th, 9:30-11:30, Aula A
