Projects from Spring 2026

Automated Water Data Validation

By Verbelco

About the Company

At Verbelco, a hydrological data company, we support nature management organizations in
measuring groundwater and surface water levels by providing both the necessary physical
infrastructure and specialized software. Our web-based platform, WaterWeb, allows clients
to monitor water data effectively, enabling them to take action whenever necessary.

About the Project

Water levels are recorded either manually by volunteers or automatically by sensors. These
sensors measure the pressure inside a measuring tube, which is then converted into a water
level. Because pressure sensors can degrade over time, they may occasionally produce
incorrect measurements. Therefore, the measurements need to be validated, which currently
requires manual work. We want to automate this process as much as possible by creating
an application that can automatically validate our measurements.

We envision the project as follows. The validation application becomes a separate
application alongside our existing WaterWeb platform. It will query WaterWeb for
measurements that require validation and request all relevant metadata (e.g. measuring tube
specifications). The application then processes and validates the data. Finally, the validation
results will be submitted back to WaterWeb.

As the final result of the project, we expect:

  • An extendable validation application
  • Usage of modern tooling (examples for Python include uv, ruff, Polars)
  • Multiple validation methods, including:
    o Static rule-based validations, such as:
        ▪ If the value is higher than N and the type is X, then it is invalid
        ▪ If the value is more than N higher than the previous value, then it is invalid
    o Statistical checks
    o Correlation-based checks, comparing water levels with ‘similar’ measurement points
    o Machine learning models (either supervised or unsupervised)
  • Explanation of the validation results
  • Integration with WaterWeb
  • If time permits: A client application that uses the validation service, including a
    front-end where users can configure validation rules and view results.

Technologies and Systems

Our existing platform, WaterWeb, is built using PHP Laravel and React. Because the
validation system will be a standalone application, it may be implemented in another
language. We consider Python a suitable choice due to its strong ecosystem for data
processing and analysis.

The validation application will need to communicate with WaterWeb. Together, we will agree
on an API specification.

Contact Details

AVS Control Hub

By Radboud Department Audiovisual Services

About the Company

Audiovisual Services is a department of Radboud University that provides multimedia products and services to staff, teachers and students at Radboud University and Radboudumc. These products and services include, among others, the rental and distribution of media equipment, technical organization of and assistance at congresses and symposia, and technical support for the use of AV equipment in teaching rooms.

About the Project

We currently use an application previously built by GIPhouse. This application allows AVS to remotely control all AV equipment in a classroom or meeting room, allowing us to assist users more quickly when problems arise in a room.
This application has served us well for several years, but we are now running into some limitations. The AV world is constantly evolving, not least technologically. The application hasn't kept pace with this growth because it lacks official support from our IT department.
Furthermore, after extensive use, we've noticed that some features are definitely missing from the current application.

The current application includes the following functionalities:

  • Remote control of AV control panels in classrooms and meeting rooms
  • A link (via IP) to the cameras in rooms, allowing us to monitor what's happening in a room
  • A link to the recording systems present in various rooms
  • User management. We can manage users ourselves
  • Adding new buildings and rooms and thus AV equipment

Additional requirements have arisen from an AV perspective:

  • Wake-on-LAN support for the weblecture recording boxes
  • The application should be written in .NET/C# with a Nuxt frontend (we got this requirement from ILS; the goal is to let ILS manage the application in the future)
  • Easier camera access; right now we need to manually type in a username and password
  • An override option to switch off all rooms in a building simultaneously
  • Email notifications in case of equipment failure or unusual behavior (e.g. a projector that has been on for a long time), so problems are detected early
  • A mobile version
  • Preventive and predictive maintenance based on usage hours
  • Usage statistics, e.g. which rooms are used most
  • The ability to add more equipment to a room, as more and more devices are connected to the RU network
  • Consider making the application more open-ended so that we can customize more ourselves (still somewhat vague; we'll need to talk about how to handle this)
  • Possibly SurfConext user management
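Wake-on-LAN is a simple, well-documented protocol: the target machine wakes when it sees a "magic packet" consisting of six 0xFF bytes followed by its MAC address repeated sixteen times, usually sent as a UDP broadcast. A minimal Python sketch (the recording boxes' MAC addresses would come from the application's own configuration; the snippet is language-agnostic in spirit even though the final app will be .NET/C#):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (port 9 is the usual choice)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```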

Technologies and Systems

We want to make the application as future-proof as possible. Ideally, the programming language should be the same as used at ILS (RU IT), so that we can eventually hand the application over to this department. This means .NET/C# with a Nuxt frontend.

The application opens AV control panels from various brands. The most common are Extron, Crestron, and AMX. Software from these manufacturers is required to operate these panels. The current application automatically opens the software from within the application.

We can show you the current application so you have a better understanding of the functionalities.

Contact Details

We both work Monday to Friday from 7:30 till 16:30.

Book Metadata Converter

By Radboud IT

About the Company

This project is carried out for Radboud University, Information & Library Services division, Applications team. The team is responsible for application development and management within Radboud University.

About the Project

We need a tool that pulls book metadata from the library and converts it into a custom format. The tool should also be able to fetch images from the university library (UB).

Technologies and Systems

The following technologies are currently in use, but other technologies are also allowed:

  • .NET and C#
  • Razor
  • Database
  • RabbitMQ

Contact Details

The operational contact person and client for this assignment is Loes Hilhorst: loes.hilhorst@ru.nl.

CampusEnergie 3.0

By Radboud Campus & Facilities

About the Company

Radboud University is a university that emerged from the Catholic emancipation movement of the early twentieth century. The university is guided by scientific questions and social challenges. Its special identity gives employees, students and alumni the space to act in distinctive ways. This takes shape in various institutes, centers and associations that are part of the broad Radboud community.

The Campus & Facilities division is responsible for the development and management of the facilities on the Radboud Campus. Under the motto 'Experience, connect, move, enjoy!', Campus & Facilities creates added value for all students, employees and visitors to the campus.

About the Project

'Campusenergie.nl' was upgraded in a GIPHouse project in 2024, and we are now ready for version 3.0. We notice an increasing interest in information about the energy consumption of the buildings on our campus. This information is currently shared via campusenergie.nl, but with limited functionality.

The proposal is to add the following features:

  • A new application with ErBisOne;
  • CO2 footprint;
  • Occupancy as a pull-down option on the home screen;
  • A fix for the 'total usage of the campus' button;
  • SPF factor, both instantaneous and over a period (month, year);
  • The ability to create tasks with an automatic e-mail function (for example, with upper and lower limits);
  • Alerts that are sent when a limit value is exceeded or fallen below;
  • Dashboards (see, for example, the image below).

Technologies and Systems

The available data is currently retrieved from an (external) server, and this will continue to work the same way; only the name and path of the server will probably change.

Contact Details

Contact person:
Ramón van Stijn
Email: ramon.vanstijn@ru.nl
Phone: +31 650172104

Conditional Independence Testing

By Computational Immunology Group

About the Company

This project is by the Computational Immunology Group at Radboud University Medical Center. One of our research areas is causal inference, and we maintain two widely used open-source software packages related to causal analysis:

  1. dagitty: An R package and a web app for drawing and analyzing causal diagrams.
  2. pgmpy: A Python library for causal inference and probabilistic modeling.

This project aims to improve an important component, Conditional Independence (CI) tests, used by both of these packages and by many other causal inference packages, such as bnlearn, pcalg, and causal-learn.

About the Project

In causal inference, a core problem is causal discovery, where the goal is to learn causal graphs from observational or interventional data. A major class of causal discovery algorithms (e.g., PC and Fast Causal Inference) relies on CI tests as a fundamental operation to construct the causal graph. These algorithms repeatedly evaluate CI test results on different sets of variables during graph construction. Since CI tests can be computationally expensive, they often become the performance bottleneck of causal discovery algorithms, especially when applied to large datasets. Moreover, each software package in the causal inference ecosystem currently tends to re-implement its own version of CI tests, resulting in duplicated effort, inconsistent interfaces, and increased maintenance burden.

The goal of this project is to design and implement a modular, high-performance open-source software package for CI tests. This package will reimplement the existing CI tests in pgmpy and dagitty in a lower-level language such as Rust, along with bindings for Python, R, and JavaScript to make the implementations language agnostic. The CI test implementations in this package should follow a Strategy + Registry design pattern, providing a unified interface to all CI tests. As new CI tests are constantly being developed in research, this design would also allow for the addition of new CI tests to the package in the future. Once the package and the bindings are developed, we plan to replace the existing implementations in pgmpy and dagitty with these new implementations. 

Key project objectives:

  1. Core Package: Reimplement existing CI tests from pgmpy and dagitty in Rust.
  2. Bindings: Provide bindings for the core package in R, Python, and JavaScript.
  3. Extensible Architecture: Use a modular design to support easy contribution and integration of future CI tests.

The core package, along with the bindings, will reduce redundancy across causal inference libraries, improve the scalability of causal discovery algorithms, and provide a foundation for future research and tool development.

Technologies and Systems

  • Core language: Rust preferred.
  • Bindings: Python (via PyO3), R (via extendr), and JavaScript (via wasm-pack).
  • Continuous Integration and Testing: GitHub Actions.

We are open to discussing the specific design choices and tooling with the development team. Alternatives may be considered if performance and extensibility are preserved.

Contact Details

  • Ankur Ankan
     Email: ankur.ankan@ru.nl
     Phone: +31 6 49150812
  • Johannes Textor
     Email: johannes.textor@ru.nl

Discworld Mobile Client

By Discworld MUD

About the Company

Discworld MUD is a large online game, started in the early nineties, based on the Discworld books by Terry Pratchett. Hence, the level of graphics is representative of that time period.

The game is still under active development, and entirely run by volunteer coders. There are usually 50-80 players online at a time.

About the Project

The game uses a client-server architecture, where players use a client that connects to the server at
discworld.starturtle.net over telnet. However, these days more and more players want to play from mobile
phones or tablets. Unfortunately, there are not many good MUD clients for Android or iOS, and the top
Android one was recently discontinued and removed from the app store.

Hence, we would like you to build a new browser client optimised for mobile devices, which works both
on Android and iOS (and preferably on many different browsers). This client should be able to maintain a
connection to the MUD (even if it runs in the background for a while, or the device is turned off), have
customisable buttons for common actions, and allow players to store or mail themselves logs. Reach
goals include support for (small) maps, hotkeys (for PC users), and chat capture.

Technologies and Systems

There is no fixed technology required, but we recommend Ionic with a websocket connection to a small server that maintains the telnet connection, and SQLite for persisting user data.
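Whatever framework is chosen, the small bridging server has to speak the telnet protocol (RFC 854) to the MUD and forward clean text over the websocket. One concrete sub-task is filtering telnet IAC command sequences out of the byte stream. A hedged Python sketch; it assumes sequences are never split across reads, which a real bridge would have to buffer for:

```python
IAC, SB, SE = 255, 250, 240            # telnet control bytes (RFC 854)
WILL, WONT, DO, DONT = 251, 252, 253, 254

def strip_telnet(data: bytes) -> bytes:
    """Remove telnet IAC command sequences from data, keeping plain text."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b != IAC:
            out.append(b)
            i += 1
        elif i + 1 < len(data) and data[i + 1] == IAC:
            out.append(IAC)            # IAC IAC is an escaped literal 0xFF
            i += 2
        elif i + 1 < len(data) and data[i + 1] in (WILL, WONT, DO, DONT):
            i += 3                     # three-byte option negotiation
        elif i + 1 < len(data) and data[i + 1] == SB:
            end = data.find(bytes([IAC, SE]), i + 2)
            i = len(data) if end == -1 else end + 2   # skip subnegotiation
        else:
            i += 2                     # other two-byte command (NOP, GA, ...)
    return bytes(out)
```

A production bridge would additionally respond to the negotiation (e.g. refuse unknown options) rather than silently dropping it.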

Contact Details

Richard Foster
Email: foster896@googlemail.com 
Phone: (+44) 7517 098183

Note that the client is not located in the Netherlands, so meetings will have to take place via video call.

eduSpec 3.0-II

By Education institute for Molecular Sciences

About the Project

Introduction: eduSpec 2.0

In the period 2011 – 2015, we developed the online learning environment eduSpec, to teach students in Molecular Sciences and related disciplines the principles of interpreting infrared (IR) spectra, nuclear magnetic resonance (NMR) spectra, and mass spectra (MS) by means of formative questions. eduSpec has been used satisfactorily by thousands of students over the years, but due to security issues we had to disable the server at the beginning of 2024, taking eduSpec offline.

eduSpec 3.0

We decided to redevelop eduSpec from scratch. Between March and September 2025, we carried out a relatively small project with the following results:

  • We made eduSpec 2.0 available locally again as a containerized application in Docker, allowing it to be run offline on a PC or laptop for review. It also helps in accessing and – ultimately – copying all content.
  • We evaluated a number of options, such as Shiny, Jupyter, Mercury and Streamlit for creating interactive web pages using 'interpreter' programming languages such as R and Python.
  • We finally developed two ‘minimal working versions’ or ‘proofs of concept’ for a new platform, using Shiny and Streamlit, and evaluated those implementations for simplicity and ease of use.
  • We concluded that Streamlit offers the required functionality, with a relatively clean implementation that promises to be relatively easy to expand and maintain.
  • To wrap up, we will try to transfer the current implementation to a C&CZ-maintained VM.

We would like to proceed with this Streamlit implementation and have this developed into a full-fledged new version of eduSpec (version 3.0) that runs within the FNWI infrastructure, on a C&CZ-maintained VM. Below is a limited list of requirements that should give an idea what we are aiming for.

Requirements

The application must:

  • display figures, interactive graphs of spectral data, and interactive Jmol / JSmol structures;
  • offer different types of questions, such as multiple-choice questions and questions that can be answered with a word or a number and, in particular:
        - questions that can be answered with a molecular structure (via SMILES, an ASCII representation of molecular structures) that is drawn interactively in a molecular editor (the current implementation uses JSME for this);
        - questions that can be answered by selecting a peak in a spectrum.
  • In addition, it must be possible to formulate questions as JSON files in which the type of question is defined and that incorporate feedback to the user upon giving a right or wrong answer.
  • Spectral data must be stored in a human-readable format, including a few headers that specify the type of spectrum and other relevant information (compare JCAMP-DX).
  • For ease of use and/or robustness, a second molecular editor should be available, either as a selection option for users or as a back-up that maintainers can easily swap in, should the JSME cease to function at some point.
  • It should be possible to take user input, process it using custom-made Python functions or scripts, and display the outcome as a figure, interactive graph, etc.
  • It should be possible to display mathematical formulae and the like, using LaTeX code or similar.
  • Depending on the progress of the project, it would be of interest to roll out Streamlit as a development platform for applications with a similar scope within the faculty. The project group could set this up.
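The JSON question requirement could look something like the following. The schema shown here is purely illustrative, since the actual eduSpec 3.0 format is still to be designed:

```python
import json

# Hypothetical question file; the real eduSpec 3.0 schema may differ.
EXAMPLE = """
{
  "type": "multiple-choice",
  "prompt": "Which functional group causes a strong IR absorption near 1700 cm-1?",
  "options": ["alcohol", "carbonyl", "alkene"],
  "answer": "carbonyl",
  "feedback": {
    "correct": "Right: C=O stretches absorb strongly around 1700 cm-1.",
    "incorrect": "Look again at the carbonyl stretch region."
  }
}
"""

def check_answer(question: dict, given: str) -> str:
    """Compare an answer against the key and return the matching feedback text."""
    key = "correct" if given == question["answer"] else "incorrect"
    return question["feedback"][key]

question = json.loads(EXAMPLE)
```

Other question types (word/number answers, SMILES structures, peak selection) would reuse the same envelope with a different `type` and answer-matching rule.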

Skills

You should have some experience with Python (or even with Streamlit) and with containerization of software, and be able to use GitLab for developing the new application.

Contact Details

  • Drs. Remco Aalbers, Employee FNWI Computer and Communication Affairs, Radboud University. 
  • Drs. Luuk van Summeren, Deputy Practical Coordinator Molecular Sciences, Education Institute Molecular Sciences, Radboud University.
  • Dr. Tom Bloemberg, Laboratory Coordinator Molecular Sciences, Education Institute Molecular Sciences, Radboud University.

Radboud University
Faculty of Science
Heyendaalseweg 135
6525 AJ Nijmegen

Email: tom.bloemberg@ru.nl

Phone: 024-3653452

Integrating Digital Pathology Tooling with QuP

By Computational Immunology Group

About the Company

This project is by the Computational Immunology Group at Radboud University Medical Center. Together with colleagues in the BioMedical Sciences department, we run an imaging platform based on the Polaris immunohistochemistry system (immunohistochemistry is a type of microscopy), which allows researchers to perform whole-slide imaging of tissue samples. We are using this imaging system in cancer research; you can watch a brief promotional video (in Dutch, with the popular singer Do) about this research here: https://www.youtube.com/watch?v=VHRHWJw6jcA and a more technical video here: https://app.jove.com/v/65717/author-spotlight-unlocking-insights-into-immune-cell-landscape


About the Project

For the analysis and quantification of immunohistochemistry (IHC) images, we have developed our own machine learning pipeline called “ImmuNet” (published here: https://doi.org/10.1093/biomethods/bpae094). It takes significant effort to collect and review annotations for training this pipeline. To this end, we have developed our own web application consisting of a MongoDB/Flask/Vue.js stack. While our web application is reasonably powerful and easy to use, more powerful open-source frameworks for digital pathology have emerged over the years. A particularly interesting one is QuPath (https://qupath.github.io), which provides a general-purpose user interface for common tasks in digital pathology (such as viewing huge multi-tile images and making annotations).

In this project, we would like to port the most important functionality of our own web-application frontend to QuPath. 

Key project objectives:

  1. Tiled image viewer: Build an interface to the QuPath tiled image viewer that allows users to directly browse and view the images provided in our current database.
  2. Annotation editor: Allow users to edit (create / update / modify / delete) point and region annotations for their images directly in QuPath, supporting the custom metadata we store for our annotations.

The software does not need to be released as open source, and we consider this an initial feasibility study from which we can learn the advantages and disadvantages of this approach going forward.

Technologies and Systems

  • Language used for extending QuPath: Java / Kotlin
  • Existing implementation: JavaScript / Vue.js framework, Python/flask
  • Database and deployment: Linux, MongoDB.

Contact Details

  • Johannes Textor
     Email: johannes.textor@ru.nl

IntelliGUI

By XLRIT B.V.

Introduction

We (XLRIT.com) have developed a new-generation (6GL) software development tool called GEARS that drastically increases software development speed. It achieves this by automating most manual design and coding activities in a project: from business process design to system design and, of course, creating source code for the designed system. It does so purely based on clearly specified business results (a.k.a. business requirements).

But automatically designing the user interface is a bit tricky, because a user interface is, just like a user, rather subjective, and many circumstances determine the best possible user interface. This is where IntelliGUI comes in.

IntelliGUI

We propose to do this with an intelligent Graphical User Interface: IntelliGUI, which has 3 parts:

  1. “Auto design”: screens are designed and created automatically at runtime based on the functionality
    of the task the user needs to perform; or, more to the point, based on the (type of) information the
    user will receive when performing a task and the (type of) information the user will need to enter
    when performing the task.
  2. “User adaptation”: the users themselves are able to change the screen design easily at runtime,
    similar to, for instance, Figma. For example, by putting the screen into "edit" mode and then
    changing the order and size of the widgets using drag and drop, dragging in widgets for viewing a
    picture field, or changing colors.
  3. “Auto adaptation”: screens are automatically changed depending on the situation.

In the current operational version of GEARS, we have already created “Auto design”, but it is not suitable
for User adaptation and Auto adaptation; therefore, a new version needs to be created that can provide
all 3 parts.

Desired functionality

You will be creating a usable version of IntelliGUI as a Web App and possibly matching Android/iOS apps.
The language or framework is not set, but it should be a mainstream, well-supported language and
framework. Ideally Flutter, because it supposedly supports building software that can run as a web app
as well as natively on Windows, Android, and iOS.

Auto design

This frontend will call an already existing GraphQL API to retrieve tasks and for each task its:

  • Read only data that should be shown in that task
  • Writable fields that should be entered and submitted to finish the task.

Both will include type information so you can render it correctly on the screen. E.g. a date will be rendered
in a date format, an image with an image widget, and a writable multi-line text field as a multi-line
resizable text box, etc.

The challenge here is to render a good-looking, fluent, and responsive GUI out of the box. This implies
having (or acquiring) good design skills, but also making sure you use simple and lightweight GUI
components to do the job. Tip: using a big GUI framework may drag you down into a rabbit hole, so make
wise decisions. We can help with that.

User adaptation

After the task is initially rendered in the working area of the app, it should be possible to put it in edit mode
after which the user can:

  • Reorder/move/resize the widgets (tip: it may be useful to use a fixed grid to prevent pixel-imperfect
    placement).
  • Do simple styling of widgets (e.g. change background color of a button or a text)
  • Add widgets to represent or input data. E.g. 2 decimal values could be mapped to a maps widget that
    could be dragged into the graphical user interface. And a list of datetime + numeric + numeric +
    numeric, etc. values could be mapped to a multi line graph, etc.
  • Etc.: we may add more possibilities to change the screen, taking other editors (for instance
    Figma) or your own ideas as examples. In all cases we will choose the easiest possible ways to
    adapt the screen: easy for the user and, if possible, also easy for you to implement.
  • All changes must be saved in a database as part of a user profile, so that if a user logs onto a
    different machine, the changes made will be there as well.
  • All changes should work well on both normal-sized and smaller (mobile) screens. If that is
    somehow impossible, it should be possible to store changes specific to a certain screen size,
    e.g. distinguishing between a mobile device and a normal laptop.

Auto adaptation

Basically, Auto adaptation works the same as Auto design, but now it checks a set of easy-to-change GUI
rules that may apply to all screens or only to this one. If a rule applies, it uses a specific Auto design that
is attached to that rule and adapts the GUI accordingly.


E.g.:

  • If there is a text field named "Address", then render a "maps" widget close to it to show the address in
    that maps widget. Ideally even link it so that changes in the maps widget will reflect in the value of the
    field
  • Similar for 2 decimal fields called latitude and longitude.
  • If there is a drop down field called "Member" and the user has the rights to start a process called
    "Create member" an extra [+] button is added next to the dropdown list box that invokes a URL to start
    that process.

Note that you do not need to design a "GUI" language first. You can use whichever language is already
available from the base language/framework you are using, or a simple expression language provided by
a separate library.

What is needed from you

Of course you should be smart, creative, innovative, and not scared of a challenge. You should be skilled,
or become skilled, in creating good-looking, fluent, and responsive GUIs. We also understand that time is
limited, which means that not everything can be finished and maybe only a good start can be made.
However, we would like you to try to achieve working results fast and to balance this with maintainability.

The latter means that someone else should be able to take over and bring it to the next level without any
guidance, e.g. just by starting to read the README.md.

Metabolic Dashboard

By Je Leefstijl Als Medicijn

Foundation Je Leefstijl Als Medicijn

Foundation Je Leefstijl Als Medicijn (Your Lifestyle As Medicine, JLAM) is a Dutch Public Benefit Organization (ANBI) [link]. Our mission is to reverse the pandemic of chronic diseases (such as diabetes, heart attacks, and strokes), mental health issues (such as anxiety and depression), and neurological disorders (such as dementia and Parkinson’s) by helping people to change their lifestyle. The healthcare system primarily treats symptoms without addressing one of their root causes: metabolic dysfunction from poor lifestyle choices (e.g., eating too much processed food). 

We target people with or without chronic conditions who seek to become and stay healthy and increase their health span. We offer information, practical tools, the expertise of medical professionals, support groups, and coaches who have turned their own health around. Our website attracts 700,000 visitors annually. We support 17,000 people in nine online support groups aimed at reversing their chronic conditions. We have launched the first AI lifestyle chatbot in the Netherlands, Lampie, and aim to grow rapidly to millions of visitors, enabling them to improve their health and well-being.

Project My Lifestyle Platform Mobile App

My Lifestyle Platform is a citizen science platform for conducting N-of-1 experiments, allowing users to measure, analyse, and manage biomarkers to prevent and control health conditions. It enables individuals to compare results with peers and receive scientifically backed advice. This platform is ideal for those who wish to influence their health through lifestyle changes, especially in eating habits, rather than relying solely on limited-time interactions with doctors.

For example, imagine a type 2 diabetes patient: through our app, she tracks her biomarkers and food intake. The AI helps her recognize patterns, such as an excessive intake of carbohydrates, insufficient protein or fiber, or a structural vitamin B12 deficiency. Based on these insights, the AI provides her with suggestions for a meal plan, including recipes, tailored to her taste, that does contain sufficient macro- and micronutrients. Furthermore, she can see in the app the progression of food intake, biomarkers, and the disease in other (anonymous) users with an equivalent profile, and receive advice from the AI and expert coaches.

Currently, a foundation of the web application has been developed, and in spring 2026 two student groups from different universities will work on the project. To avoid dependencies between the student groups, we decided to split the requirements into the web app itself and the automated input of measurements by integration or upload.

The core features of automated input of measurements by integration or upload are:

  • Input by scanning of PDF lab results using open-source OCR/AI modules
  • Integration with Apple Health for activities, sleep, weight, calories, nutrition
  • Integration with Samsung Health for activities, sleep, weight, calories, nutrition
  • Integration with Withings for weight
  • Integration with MyFitnessPal for activities, calories, nutrition
  • Integration with Google Fit for activities, sleep, weight, calories, nutrition

Technologies and Systems

The My Lifestyle Platform uses a technology stack centered on Python with the FastAPI framework for its backend, connected to a PostgreSQL database. The user interfaces are built using React and MUI X for the web application, and we plan to use React Native Paper for future mobile apps, all hosted on the Scaleway European cloud platform. For authentication, it uses Keycloak for Single Sign-On (SSO).

Based on our experience working with student groups, we would like the student groups to work in our environment, because they will add features to an existing system. Furthermore, we think two-week sprints are essential for a successful project.

Contact Details

Music Carrier Database

By Wouter van Orsouw & Jan Schoone

About the People

Wouter van Orsouw is a mathematics teacher and tutor for the bachelor Mathematics.

Jan Schoone is a PhD-candidate at Digital Security with a background in (theoretical) mathematics. We are both music enthusiasts and avid record collectors who are looking for a database to structure our collections.

About the Project

Music carrier database

For almost ten years we have been looking for a database that stores our music carrier information (CDs, vinyl records, etc.) with some specific requirements. As it does not appear to exist in the form we would like, and we cannot make it ourselves, we are applying to the GIPHouse project.

We are looking for a simple and easy-to-use User Interface with some requirements on the input fields for the database as well as certain outputs that can be automatically generated using the database. We will discuss the inputs and outputs in the following two sections. 

The UI can be designed almost entirely according to the team's preference, using any programming language they consider best. It will be much appreciated if the code is well-commented or explained so that smaller changes can be made by us later on.

Input section

For each carrier in the database, we would like to have the following input fields, with some specifications:

Artist: The name of the artist(s)/band(s) featured on the release.

While the name is being typed, we would like a dropdown menu to open listing all artists already in the database whose name contains the typed substring, narrowing the list with each additional keystroke.

If the artist is not yet in the database, a new artist can be added.
The possibility to add multiple artists to one entry in the database is a must (think of compilation albums featuring different artists, e.g. movie soundtracks).

Title: The title of the recording.

Year: The year of the recording/release (four digits).

Type #1: First type of the recording (“Album”, “Single”, “Extended Play” etc.) with a possibility for the user to add new types.

Type #2: The second type of the recording (“Studio”, “Live”, “Compilation” etc.) with a possibility for the user to add new types.

Format: The format on which the recording is released (“CD”, “LP”, “7-inch”, etc.), with a possibility for the user to add new formats.

Remark: An input field where the user can add an additional remark.
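The fields above could be captured in a record type such as the following sketch (all class, field and function names are illustrative, not part of any existing codebase); it also shows one way to implement the narrowing artist dropdown as a case-insensitive substring filter:

```python
from dataclasses import dataclass


@dataclass
class Carrier:
    """One entry in the music carrier database (field names are illustrative)."""
    artists: list[str]   # one or more artists; compilations list several
    title: str
    year: int            # four-digit year of the recording/release
    type1: str           # e.g. "Album", "Single", "Extended Play"
    type2: str           # e.g. "Studio", "Live", "Compilation"
    fmt: str             # e.g. "CD", "LP", '7-inch'
    remark: str = ""     # optional free-text remark


def suggest_artists(known_artists: list[str], typed: str) -> list[str]:
    """Case-insensitive substring match; the result shrinks as more
    keystrokes arrive, matching the dropdown behaviour described above."""
    needle = typed.lower()
    return sorted(a for a in known_artists if needle in a.lower())
```

Re-running `suggest_artists` on every keystroke is cheap at collection scale (thousands of artists), so no index is needed for the dropdown.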

Output section
For the outputs, the database should (at least) be able to provide the following information within the program:

  • A table of the most-occurring artists in the database, sorted from most to least occurring, initially showing for example 20 artists, with the possibility to extend the list.
  • A table giving the number of entries per decade in the database, subdivided by format.
  • Similar to the previous output, a table counting entries per Format and Type #1.
  • An overview of all entries in the database for one artist; Title and Year are enough to present in the list. Preferably each entry is clickable (or opens a pop-up) showing the other information stored for it. At the bottom of this list we would like to see the number of entries where this artist is the “main” artist, followed by a list of entries the artist “appears on”.

Lastly, we would like the database to be able to produce a list of all entries (as a .txt or .pdf file), formatted as “Artist – Title (Year)”, sorted first by Format (in separate sections of the file), then by Artist (alphabetically) and then by Year (chronologically).
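The sorting and grouping for that export can be sketched in a few lines (assuming entries are available as simple tuples; the section-header style is a placeholder, not prescribed):

```python
from itertools import groupby


def export_lines(entries: list[tuple[str, str, str, int]]) -> str:
    """entries: (format, artist, title, year) tuples.
    Returns the export text: one section per format, each sorted by
    artist (alphabetically, case-insensitive) and then year."""
    ordered = sorted(entries, key=lambda e: (e[0], e[1].lower(), e[3]))
    lines = []
    for fmt, group in groupby(ordered, key=lambda e: e[0]):
        lines.append(f"== {fmt} ==")  # placeholder section header per format
        lines.extend(f"{artist} – {title} ({year})"
                     for _, artist, title, year in group)
    return "\n".join(lines)
```

Because `sorted` orders on the full (format, artist, year) key before `groupby` splits on format, each section comes out internally sorted as required.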


Technologies and Systems
There are no requirements on programming languages and materials used, except that the UI/database should run on Windows machines and the code should preferably allow minor changes to be made without too much difficulty.

Contact Details
Wouter van Orsouw – wouter.vanorsouw@ru.nl 

Jan Schoone – jan.schoone@ru.nl

Nifti

By HFML-FELIX

About the Company

HFML-FELIX is a large research facility at Radboud University. It consists of a laser department and a laboratory which houses very strong magnets. Fundamental physics research is done in the cores of these magnets. With Nifti we want to use this knowledge of magnetic fields to develop a levitated transport system. Nifti is a collaboration between Radboud University and HAN University of Applied Sciences (Automotive). Nifti stands for National individual floating transport infrastructure.

Nifti is a silent, sustainable and inclusive transport system that uses magnetic levitation for propulsion. A series of electromagnetic coils embedded in the road repels permanent magnets that are placed in a base, which can be used for personal or freight transport. A first prototype, a straight track at 1:10 scale, has been built and proves the working principle. The system is controlled by a Python-based program developed in collaboration with the HAN Automotive faculty.

About the Project

The goal is to build a scheduling and visualization system for a simulated Nifti network. Given a graph-based track network and a set of pods, the system should:

  • Schedule routes: Determine which pods move where and when, with support for different optimization strategies (e.g., maximize throughput, minimize average transit time, minimize longest transit time)
  • Visualize the network: Display the track layout and real-time pod movement
  • Support interactive exploration: Allow modifications to the network, pod requests, or routing strategies to explore different scenarios
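A minimal building block for the routing part of the scheduler could be an ordinary shortest-path search over the track graph. The sketch below (the network format and function name are our assumption) uses Dijkstra's algorithm with edge weights interpreted as travel times:

```python
import heapq


def shortest_route(network: dict, start: str, goal: str):
    """Dijkstra over a track network given as
    {node: [(neighbour, travel_time), ...]}.
    Returns (total_time, [nodes on the route]), or (inf, []) if unreachable."""
    queue = [(0, start, [start])]  # (elapsed time, current node, path so far)
    seen = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dt in network.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (time + dt, nxt, path + [nxt]))
    return float("inf"), []
```

The optimization strategies listed above would then layer on top, e.g. by rerouting pods around congested edges or by choosing departure times that minimize the chosen objective.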

This tool will be used to demonstrate Nifti's potential applications and explore how the system could behave under different conditions and constraints. The network size will be limited—this is an exploratory project focused on proving concepts rather than large-scale simulation.

Technologies and Systems

Students are free to choose their technology stack. The visualization could be implemented first in 2D (perhaps web-based) and later in 3D (e.g. using the Unity engine to simulate moving vehicles).

Contact Details

Gerben Wulterkens; Project coordinator; gerben.wulterkens@ru.nl; 06-50052305

Frank Berndsen; Researcher HAN; f.berndsen@han.nl

Portal Genius SAAS Webapplication

By The Right Direction

About The Right Direction

The Right Direction is a small company of four people, active for almost six years in the field of geospatial software development, working primarily with customers that use Esri ArcGIS software. We build almost exclusively custom software for those customers. Besides the custom software development, we build and sell two products, KLIC Genius and Portal Genius. In November 2024 we won the GeoInfo Nederland GeoPrestige Award.

About the Project

Portal Genius is the core and most important product of The Right Direction. We serve big customers with this software, such as the Dutch Army and Rijkswaterstaat. It started out as a desktop-only product, but we are about to overhaul it and switch to a client-server approach.

As part of this switch to a client-server architecture, we would like the students to rebuild a part of the product as a web application. That means building a frontend with Angular and Material, and a backend based on a C# minimal API and PostgreSQL.

Besides that, we would like to enhance the product with a search engine: a configurable search engine that can search on text as well as spatial geometries. To support this, we need to harvest some specific data as part of the API.

The students will also get the opportunity to write the scripts to package the whole product, so we can do a next-next-finish installation at customers.

We still have to define the exact scope of the project, because the whole project is bigger than the GIPHouse period.

Technologies and Systems

Our stack consists of C#, PostgreSQL and Angular with the ArcGIS Maps SDK.

Contact Details

PubHubs – Calendar

By PubHubs

About PubHubs

PubHubs is a community platform based on public values. It provides a safe online space for public organisations where they and their communities can interact without handing over all kinds of information to commercial parties.

The platform consists of several hubs, communities of a public organisation such as a library, school, patient organisation, a local sports club or broadcaster. Within a hub there are multiple rooms. A room can be a chat, a forum or anything else that uses the Matrix protocol event system. Rooms can be open rooms - accessible to everyone - or secured rooms. Secured rooms are restricted to users with specific attributes, for example users over 18, users with a verified email address or users whose postal code matches a defined list.

The PubHubs infrastructure consists of a central server and several hub servers. The central server is responsible for authenticating end-users and keeping track of connected hubs, but it has no knowledge of the activities within the hubs. On the other hand, hub servers have no knowledge of user details - beyond their existence - nor of other hubs. Each hub server is operated by its own organisation, which is responsible for user content and conduct. Currently hub servers are implemented as adapted Matrix home servers. PubHubs uses the Yivi app (www.yivi.app) for authenticating and disclosing attributes (for secured rooms).

PubHubs has its own web client, which consists of two main parts:

  • The Global Client, served by the PubHubs central server, responsible for global authentication and hub switching.
  • A Hub Client, served separately by every hub server and embedded within an iframe of the Global Client.

This project focuses on the Hub client, specifically developing an extension to it.

Project: PubHubs Calendar

As mentioned earlier, rooms in PubHubs can be of several types: chats, file libraries, forums. Each type has its own functionality and use case. In this project we would like to add a new type of room that functions as a calendar.

Hubs and their rooms in PubHubs represent organisations and their communities. These groups often hold regular meetings, events and appointments. PubHubs currently provides spaces for communication, but there is no integrated way to schedule and share these events.

We would like to have a calendar per hub: a shared calendar for the entire organisation, accessible to all members. There could be meetings/events on various levels: either for the whole hub, or for a specific room of the hub and then visible only to the members of that room.

Functionality:

  • Users should be able to create events and later edit or remove their own events.
  • Users should see all the events for all the rooms they are subscribed to per specific hub.
  • Users should be able to download events to include them in their personal calendars without revealing personal data.
  • Perhaps it would be useful to let users invite other users when adding an appointment; if so, let the invited user accept and register themselves for the meeting.
  • When the video-calls in PubHubs are implemented, invites for these could also be added and the video call could be started from the calendar.

Techstack

For building the hub client of PubHubs the following stack is used:

  • TypeScript
  • VueJS 3 (with Pinia)
  • Tailwind (CSS)

Hubs use the Matrix protocol (www.matrix.org), so you’ll have to dive into the Matrix specification, as well as how it is implemented in the PubHubs client.

The PubHubs source with documentation is available on GitHub: https://github.com/PubHubs/PubHubs.

A working demo of PubHubs is available (feel free to register): https://app.pubhubs.net

Contact Details

  • First contact: Frans Lammers – Lead Developer of PubHubs
    Email: frans.lammers@ru.nl 
    (working days: Monday-Thursday)
    Location: Erasmusplein 1, room 19.22
  • PubHubs general contact:
    E-mail: contact@pubhubs.net
    Website: www.pubhubs.net

RapidReport: AI-driven radiology reporting

By Plain Medical

About the Company

Plain Medical (plain-medical.com) is a radiology AI startup with offices in Nijmegen, Bremen and Berlin. Our main office is in Mercator 2, Toernooiveld 300 in Nijmegen. We analyze radiological studies with deep learning to find all abnormalities and from this analysis we construct a draft radiology report. We collaborate with various hospitals and have a large database with anonymized radiology studies and radiology reports. A team of medical analysts and radiologists annotate the anatomical structures and abnormalities on these scans, and that annotated data is used to train iterations of our networks.  

About the Project

We use a special viewing/annotation environment for the radiology studies. This environment is built upon CuraMate, a platform developed by Fraunhofer MEVIS in Bremen. It runs in a browser and is built with Quasar (https://quasar.dev/). Analysts use this to annotate abnormalities and correct output of our deep learning models. It runs locally on servers in our office. A screenshot is shown below:

We are further developing all viewing and annotation functionality in this platform. A limitation of the platform is that it does not (as of yet) support multiple tabs/monitors. We are therefore looking for a second application that would run on a second monitor, in another tab, for displaying, editing and dictating (with speech recognition) the radiology reports.

In this project, you will develop a prototype of that application. This application should 

  • Run in a browser, ideally using Quasar to achieve a look and feel similar to our viewing/annotation software
  • Connect to a database of radiology reports and tags of the corresponding radiology studies
  • Display reports in different ways, e.g. showing an overview of patients and dates of studies, and then the report for one or two studies (the current and a prior), with a timeline so reports from different dates can be selected
  • Send commands to the CuraMate tab to display certain studies. CuraMate provides the possibility to do this via extensions.
  • Provide functionality to search efficiently in the database of radiology reports (around half a million reports) using classic (regex) search and open-source LLMs. We currently do this in a small stand-alone application, with keyword search, regex, and local LLMs via https://ollama.com, but we have ideas on how to optimize this further, e.g. including a prompt library, making it easy to switch between models, and running batch jobs over many reports. We constantly use this type of search/data-mining operation to select cases with specific types of abnormalities that we want to focus on in our next annotation studies. Ideally, our lead analysts and radiologists can use your application to search for studies to annotate, based on report contents
  • Provide dictation possibilities. Radiologists are used to dictation. With modern free speech recognition models, we have achieved promising results. We tested Voxtral (https://mistral.ai/news/voxtral), but other open-source solutions may work better.
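The classic part of that search pipeline can be sketched in a few lines (the report structure and function name are our assumption; an LLM stage would then rank or refine the surviving hits):

```python
import re


def search_reports(reports, keyword=None, pattern=None):
    """Filter a list of {'id': ..., 'text': ...} report dicts by a
    case-insensitive keyword and/or a regular expression.
    Returns the ids of matching reports, preserving input order."""
    regex = re.compile(pattern, re.IGNORECASE) if pattern else None
    hits = []
    for report in reports:
        text = report["text"]
        if keyword and keyword.lower() not in text.lower():
            continue           # keyword filter: cheap, runs first
        if regex and not regex.search(text):
            continue           # regex filter: more expressive, runs second
        hits.append(report["id"])
    return hits
```

Running the cheap keyword filter before the regex (and any LLM pass last) keeps batch jobs over half a million reports tractable.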

Technologies and Systems

We are open to your input regarding technologies. We’d like to run the platform you develop on the same Linux server where we run our CuraMate annotation process, using Docker containers. We also run Docker containers for the LLMs and speech recognition models. The servers have a powerful GPU. The code should be set up in such a way that deployment on AWS is an easy next step, but for this project we will likely use our in-house servers, and we can provide the team access to those via VPN. We develop our code on GitHub.

Contact Details

  • Bram van Ginneken
    Email: bram@plain-medical.com
    I prefer communication via e-mail but if needed my phone number is +31 6 14021323.

 

Upgrade of ToDI

By Radboud Department of Language and Communication

About the Company

https://test.todi.cls.ru.nl/

Client: The Department of Language and Communication, Faculty of Arts,
Radboud University, carries out teaching and research in the fields of
language, communication and information. Besides fundamental research, it runs
several development projects involving digital applications for teaching and learning.

About the Project

Current program: Transcription of Dutch Intonation is a web-based interactive
course aimed at teaching students how to transcribe the sentence melody
(intonation) of Dutch sentences. Such transcriptions specify what intonation a
sentence was spoken with and can be used to reproduce the intonation
artificially, as is done in this program. Briefly, TODI provides explanations of
specific aspects of Dutch intonation. At approximately regular intervals, exercises
are presented for students to check their practical ability to assign intonation
labels to sentences. An exercise contains the sound files of 12 sentences, each of
which is presented as a written sentence, together with a PLAY button as well as
pull-down menus below particular words from which the student must choose
the correct intonation label. The student can play the original sound file and
compare its intonation with the artificial intonation for the same sentence as
produced with the help of the chosen labels (RESYNTHESIZE and PLAY
RESYNTHESIS). The student can also request the KEY and perform the resynthesis
on that basis (see the appended screenshot). In 2023, a GIPHouse project
successfully upgraded the front end and reprogrammed the interactive modules
of the older version in React. It is available at test.todi.cls.ru.nl.

The problem:  The TODI course is written in English, while the contents are Dutch.
We are currently in touch with Kristine Yu (UMass Amherst) and Cong Zhang
(Newcastle University, UK) about possible AmE and BrE versions. Since the
intonation grammars of English and Dutch are very similar, the program’s
contents could in principle be replaced with British English or American English
materials. To facilitate the construction of an English “TODI”, there is a need
for an editing tool that prompts the program editor for the required
information, either to construct the sentences of a new exercise, in which case
all prompts are active, or to edit existing exercises, in which case all
prompts appear pre-filled with existing data, to be altered as needed. This
editing tool might be implemented as a single web page. The result should be
that program editors can replace an exercise in full or in part, down to the
replacement of a single tone symbol.
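One way the editing tool could represent an exercise internally is sketched below (class and field names are illustrative, and the label strings merely stand in for tone symbols; none of this reflects the existing React code):

```python
from dataclasses import dataclass, field


@dataclass
class LabelSlot:
    """A word carrying a pull-down menu of intonation labels."""
    word: str
    options: list[str]   # labels offered in the pull-down menu
    key: str             # the correct label, revealed on request


@dataclass
class Exercise:
    """One exercise: a sentence, its sound file, and its labelled slots."""
    sentence: str
    sound_file: str
    slots: list[LabelSlot] = field(default_factory=list)

    def replace_key(self, word: str, new_key: str) -> None:
        """Edit a single tone symbol in an existing exercise,
        matching the finest-grained replacement described above."""
        for slot in self.slots:
            if slot.word == word:
                slot.key = new_key
```

An editor form would then either create a fresh `Exercise` (all prompts active) or load an existing one and show each field pre-filled for alteration.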

The work schedule includes a detailed evaluation of the existing product. Aspects
that will be reviewed concern the division of the course into an elementary and
an advanced part, the visualization of the resynthesis contour (pitch graph) in a
single window with the original pitch graph, and the values for the custom
parameter options available to the student for experimenting with different pitch
ranges of the artificial intonation contour.

Contact Details