IOI 2023 was hosted by Hungary.
Interview: Martin Mares
Martin Mares is the chair of the International Technical Committee, which oversees the technical side of the IOI: program development tools, evaluation software, hardware, networks and their security.
Martin is also a three-time gold medalist at the International Olympiad in Informatics and a former team leader for the Czech Republic. Topics discussed in the interview include the work of the ITC, contest systems, and his experience as a former competitive programmer.
1. Can you give us a brief introduction about yourself (where are you from, where do you work/teach, etc…)
I live in Prague, in the Czech Republic. I currently combine research and teaching at Charles University in Prague with a bit of freelance programming and a lot of organizing of programming contests.
2. When did you start being interested in programming? (do you remember at what age, what programming language, maybe you started learning programming at school, etc)
Oh, that’s a long time ago… I started playing with computers in 1987, when our neighbor bought a Sinclair ZX Spectrum (a wonderfully simple 8-bit home computer). It turned out to be a wonderful toy, especially because there was almost no software available in our country at that time, so we had to do everything ourselves. I was 11 years old, but within a couple of months I learned the Spectrum’s dialect of BASIC. It was obvious, though, that BASIC was too slow for many interesting things, so I switched to assembly language quite soon. It was a real pioneers’ era: close to no documentation was available, and we learned a lot by experimentation. In fact, it was years before our school got its first computer :)
3. A lot of newer contestants don’t know this, but you are a three-time gold medal winner at the International Olympiad in Informatics. Can you tell us something more about your experience with the IOI as a contestant?
It was a very different IOI from the contemporary ones. We programmed under MS-DOS, and there was no network. At the end of the contest, we handed in our solutions on a floppy disk. Later, we met at the computer with our (human) evaluator, who carefully copied the test data from another floppy, ran our program, and compared the outputs. The time limit was measured on a stopwatch, and if the program exceeded it by too much, the machine had to be rebooted. From today’s point of view, it was like the stone age. But still, the spirit of the tasks was very similar.
4. What keeps you interested in competitive programming and participating at international competitions as an organizer? (do you like meeting new people, discussing new stuff/algorithms/data structures, etc)
All of that together. I like solving puzzles. I enjoy discussing the beauty of the puzzles with other people. This includes not only algorithmic puzzles, but also mysteries of compilers, operating systems, and even computer hardware. From the organizer’s point of view, every IOI is unique. One year, it was a big challenge to design test data which defeats all heuristics the contestants could come up with. Another year, we had a solution whose run time was significantly different on two seemingly identical machines. Yet another year, we solved mysterious networking issues.
5. You are the chair of the International Technical Committee at the IOI. Can you briefly describe what the ITC does and what its role is within the IOI?
The ITC keeps its eyes on all technical issues around the contest: computers, networks, compilers, the contest system, various monitoring systems, and so on. We are there to help the host and the ISC with all technical mysteries and to make sure that everything runs smoothly. Together with the other IOI committees, we are the “long-term memory” of the olympiad: we remember the past mistakes and past successes, and what led to them.
6. What tools and software packages does the ITC develop and maintain, and are they available for free to the entire IOI community for use during their regional/national competitions and training camps?
Members of the ITC are active in several software projects used in programming contests. Among others:
- CMS - the contest management system, originally developed for the IOI in Italy and still used and extended. This is what the contestants use to submit their solutions and have them evaluated.
- Isolate - a sandbox for secure execution of programs, so that they are isolated from the rest of the system and subject to time and memory limits. It is used by the CMS, but also by other contest systems.
- The IOI translation system, which handles translation of competition tasks into the native languages of all the teams.
- The technical checklist - not a piece of software, but a document listing many things which should be done before the contest starts to avoid known problems. Behind every sentence, there is a colorful story :)
All this is available to the community at https://itc.ioinformatics.org/. Some of the programs are probably specific to the IOI, but many can be useful for organizers of other contests.
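The core idea behind a sandbox like isolate — run the submission as a separate process under hard time and memory limits — can be illustrated in miniature with POSIX resource limits. This is only a sketch of the concept, not how isolate actually works; isolate itself relies on Linux namespaces and control groups and does far more.

```python
import resource
import subprocess
import sys

def run_limited(cmd, cpu_seconds, memory_bytes):
    """Run cmd as a child process under hard CPU-time and memory limits (POSIX)."""
    def set_limits():
        # Applied in the child just before exec: the program is killed once it
        # exceeds its CPU budget, and allocations past the memory cap fail.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds + 1))
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))
    return subprocess.run(cmd, preexec_fn=set_limits)

# An infinite loop is terminated after roughly one second of CPU time.
result = run_limited([sys.executable, "-c", "while True: pass"], 1, 512 * 1024 * 1024)
print("exit status:", result.returncode)  # negative: the child was killed by a signal
```

A real contest sandbox additionally isolates the filesystem and network and measures time far more precisely, which is exactly what isolate provides.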
7. Some online judges aim to support as many programming languages as possible (20+), which is different from what the IOI aims to do. How do you see this changing in the future - will the IOI support more languages or will it focus on a handful of languages in order to allow for more specific and interesting tasks?
There are multiple reasons for that. First of all, the IOI tries to maintain a very high standard of quality, which means that every combination of a task and a language has to be tested. Second, instead of reading the input from a file and writing the output to another file, tasks at recent IOIs typically use a function-based interface to communicate with the grader. This is often believed to simplify the contestants’ work, especially in interactive tasks. However, it also means that a grader has to be implemented for every language. Put together, supporting a programming language takes a non-trivial amount of work, and the resources of the organizers are limited.
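A toy sketch of the function-based interface described above (the task and all names here are invented for illustration; real graders in systems like CMS are per-language and considerably more involved): the contestant implements only a function, while the grader owns all input handling and scoring.

```python
# Hypothetical function-based task: the contestant implements find_max and
# never touches stdin/stdout; the grader calls the function and checks results.

def find_max(values):
    """Contestant's code: return the largest element of a non-empty list."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

def grade(test_case, expected):
    """Grader's code, fixed by the organizers: run the contestant's
    function on one test case and verify the answer."""
    return find_max(test_case) == expected

print(grade([3, 1, 4, 1, 5, 9, 2, 6], 9))  # → True
```

This is also why every supported language needs its own grader: the same calling convention must be reimplemented for C, C++, and whatever else the contest allows.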
Furthermore, once you enable multiple languages with significantly different properties, various issues of fairness spring up. A nice example is Python: for most tasks, it is impossible to write a Python program which solves all the test cases within the given time limit. Increasing the time limit does not help, because it would let algorithmically inefficient programs implemented in fast languages pass. Using a different time limit for Python opens up many ways to misuse the system: for example, you can compile a C program and embed the resulting binary in a Python program. From the point of view of the contest system, it is written in Python and subject to Python time limits, but it really runs much faster than pure Python. We can call this cheating and try to forbid it in the competition rules, but it is terribly hard to define a reasonable boundary. Sometimes it is proposed to add further languages as second-class, meaning that it would not be guaranteed that a full score can be achieved in them. Still, there could be a weaker guarantee, such as that the first two subtasks of every task can be solved in all languages. But that changes the spirit of the contest to some extent, so it is a contentious topic.
8. Can you briefly describe the way IOI participants are selected in the Czech Republic? (how many competitions there are, are they online/onsite, maybe you know how many high school students take part in them...)
Our national programming contest started as a part of the olympiad in mathematics (and formally it still is). We continue in this spirit: our contest combines practical programming tasks with theoretical tasks where the goal is to design an algorithm for a given problem, describe it, and prove its properties. The contest has three rounds: in the first round, the students have several weeks to solve a problem set at home. The second round is the regional round: in each region, the contestants gather to solve 4 theoretical problems. The best of these advance to the national finals, which have two problem-solving days: one with theoretical tasks, the other with practical ones. The IOI participants are selected from the best finalists at a weekend-long selection camp. By the way, even though Czechoslovakia dissolved years ago, we still prepare the tasks together with our Slovak colleagues, and both countries run separate instances of the same contest.
9. Do you have a favorite IOI task and which one is it (and why)? How about a task so specific that it took the ITC committee a long time to implement/support, but you still decided to do it because it was worth it?
Actually, the answer to both questions is the same :) My favorite tasks are the non-standard ones, which require more general problem-solving skills than hard training in textbook techniques. These include Art class from IOI 2013 (even though technical problems during the contest made the task less pleasant) and Saveit from IOI 2010 (based on ideas from communication complexity).
10. As someone who is involved with organising programming competitions and supporting them from a technical role, what do you think about competitions where you get a result for your submissions immediately (like the IOI), versus competitions where you don’t know if your submission is correct or not until the competition has ended? Will the IOI continue to offer participants feedback and are there any discussions on modifying that rule?
I strongly believe that some feedback is important, because it makes the final scores less sensitive to stupid mistakes in otherwise good solutions. Maybe I am somewhat biased because as a contestant at IOI 1995, I lost most of the points in one task due to a silly 1-bit mistake in formatting the output :) Still, it is not clear what the optimal amount of feedback is - for example, in some tasks complete feedback could reveal too much information about the test data. The amount of feedback is still evolving: we no longer provide feedback on all test cases, just the first error in every subtask. And the evolution is likely to continue in the next few years.
There are good arguments both for and against broad feedback. Feedback can reduce the stress of the contestants, as they can stop worrying about the tasks they have finished, knowing that the solution is correct. On the other hand, feedback emphasizes the online aspects of the contest, which makes temporary technical problems in the contest system much more serious than in feedback-less contests.
11. Several people from the IOI community who visit our website are involved with organizing regional and national competitions in informatics. As someone with extensive experience with programming competitions and events, what is your recommendation to them in order to keep participants happy and make sure the events are run smoothly?
(see my answer to question #6)
12. Has the ITC researched the use of cloud computing platforms like Amazon Web Services or Google Cloud Platform for running and testing user submissions, and is that something that you would recommend to contest organisers?
Cloud computing works great … except in the cases where it doesn’t, and it is really hard to recognize those :) More seriously: in most clouds, virtual machines are subject to lots of outside influence. For example, the performance of one virtual machine (running on a single core of a physical machine) depends on the other VMs running on the other cores. This increases the variance of time measurements, and since the behavior of the other VMs is not random, the measurement errors can be systematic. Even worse, the hypervisor can decide to perform a live migration of your VM to another host, in which case performance goes completely awry. Of course, most of the time all you get is a little more random noise than on a normal machine, but sometimes you get much worse. You can probably ignore this in a small contest, or in a long-running one where the contestants can easily re-submit solutions if they get suspicious results.
Overall, it would be great to do a thorough analysis which measures the performance issues over several months of run time - over a long enough window, the low-probability-but-high-fallout events are likely to happen. But as far as I know, nobody has done that yet.
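A minimal version of such a measurement - sampling the wall-clock time of a fixed CPU-bound job and reporting its spread - might look like the sketch below. This only captures short-term jitter on one machine; the analysis described above would need months of samples across many VMs to catch the rare events.

```python
import statistics
import time

def workload():
    # A fixed CPU-bound job whose wall-clock time we sample repeatedly.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

samples = []
for _ in range(20):
    start = time.perf_counter()
    workload()
    samples.append(time.perf_counter() - start)

mean = statistics.mean(samples)
spread = statistics.stdev(samples)
print(f"mean {mean * 1e3:.2f} ms, stdev {spread * 1e3:.2f} ms, relative {spread / mean:.1%}")
```

On a quiet dedicated machine the relative spread tends to be small; on a noisy or migrated VM, the same script can show much larger and systematically skewed spreads - which is exactly the fairness problem for timed contests.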
13. What do you do in your free time? Do you have any hobbies - sports, watching movies/TV, computer games, contributing to open-source software ... ?
Free time? Never had one :) I enjoy being busy most of the time. I like writing both software and books. I enjoy reading, music, and good tea. I love hiking in the hills and sleeping under the sky full of stars. And I am an enthusiastic participant and organizer of puzzle hunt contests.