If $a,b$ are in different segments, the expectation can be calculated directly.
If $a,b$ are in the same segment, the expectation is $\dfrac{\text{segment length}}{3}$.
Time complexity: $\mathcal O(n\log n)$.
## B. Binary Substrings
- Difficulty factor: $8$.
- Quality factor: $7$.
First, among the binary substrings of length $k$, $S$ contains at most $\min(2^k,\,n-k+1)$ distinct ones. Intuition suggests this bound can be attained with equality.
Consider first the case where there exists an integer $k$ satisfying $n=2^k+k-1$: our task is then to make the $n-k+1=2^k$ substrings of length $k$ pairwise distinct, i.e., the sequence is a $\text{de Bruijn}$ sequence. The construction: build a graph on $2^{k-1}$ nodes, and for each $\tt 01$ string of length $k$, add an edge from the node representing its first $k-1$ bits to the node representing its last $k-1$ bits. The $\text{de Bruijn}$ sequence is then exactly an Euler circuit of this graph, which is straightforward to extract.
Now consider $2^k+k-1\leq n<2^{k+1}+k$. First construct a $\text{de Bruijn}$ sequence of length $2^k+k-1$; it traces a Hamiltonian path through $2^k$ nodes of the graph with $2^{k+1}$ nodes, denoted $u_1,u_2,\dots,u_{2^k}$. Each step $u_i\to u_{i+1}$ has already been assigned one connecting edge; we assign another connecting edge $(u_i\text{ XOR }2^k)\to (u_{i+1}\text{ XOR }1)$, and it is easy to show that none of these new nodes coincides with any $u_i$.
We now effectively have a Hamiltonian path on the graph with $2^{k+1}$ nodes; partition the remaining edges into cycles, and greedily splice these cycles into the path.
Time complexity: $\mathcal O(n)$.
## C. Clamped Sequence
- Difficulty factor: $1$.
- Quality factor: $3$.
Time complexity: $\mathcal O(n^2)$.
## D. DRX vs. T1
- Difficulty factor: $0.5$.
- Quality factor: $1$.
Introductory question.
Time Complexity: $\mathcal O(1)$.
## E. Graph Completing
- Difficulty factor: $6$.
- Quality factor: $7$.
Consider shrinking the point first, then consider the contribution made by an extra edge $(u,v)$:
- If $u,v$ already lie within the same $\text{scc}$, there is no contribution.
- If $u,v$ do not lie within the same $\text{scc}$, then this edge covers the path between $scc_u$ and $scc_v$ in the contraction tree.
The final graph is valid if and only if every edge of the contraction tree is covered.
So we only need to care about the contraction tree. "Every edge is covered" is hard to count directly, so use inclusion-exclusion: designate some edges as never covered and cut them. The extra edges can then only be added within a single connected component of the contraction tree, which we can count with a simple DP, folding the inclusion-exclusion coefficients into the DP.
Let $f_{u,i}$ denote the inclusion-exclusion sum over the subtree of $u$ with $u$ lying in a connected component of size $i$. Each time a child $v$ is merged, enumerate whether the edge $(u,v)$ is cut: multiply by $-1$ if it is cut, and otherwise account for the contribution $2^{i+j}$.
Time complexity: $\mathcal O(n^2)$.
## F. Half Mixed
- Difficulty factor: $3$.
- Quality factor: $5$.
If the total number of submatrices is odd, there is no solution.
Otherwise the matrix can be squashed into a single row or a single column, and that one-dimensional problem is solved greedily.
Time complexity: $\mathcal O(nm)$.
## G. Meet in the Middle
- Difficulty factor: $7$.
- Quality factor: $5$.
Consider centroid decomposition of the tree; hang the subtrees of the centroid-decomposition structure onto an auxiliary tree to build a virtual tree, then use rerooting to compute, for each point, the farthest distance and the second-farthest distance that comes from a different subtree than the farthest.
Time complexity: $\mathcal O((n+q)\log^2n)$.
## H. P-P-Palindrome
- Difficulty factor: $6$.
- Quality factor: $6$.
It is easy to show that $P,Q$ must both be concatenations of several copies of a single palindrome $s$, with $P$ and $Q$ sharing the same $s$.
We build a generalised PAM and compute the maximum repetition count of each palindrome.
To build a generalised PAM, simply reset `cur` to the root after finishing one string and move on to the next.
Time complexity: $\mathcal O((\sum|s_i|)\Sigma)$.
## I. Quartz Collection
- Difficulty factor: $6$.
- Quality factor: $6$.
After substituting $(a_i,b_i)\leftarrow (a_i-b_i,0)$ the sum becomes fixed. What the first and second player take then follows a regular pattern and is easily maintained with data structures.
Time complexity: $\mathcal O((n+q)\log n)$.
## J. Referee Without Red
- Difficulty factor: $9$.
- Quality factor: $10$.
This kind of problem is usually just a matter of guessing some properties that feel right.
First, the rearrangement in the original problem can be generalised to rearranging a row or a column into any given arrangement. We pull out the cycles of each permutation and line them up together, which is equivalent to grouping neighbouring row elements and column elements. Operating on a row is equivalent to cyclically shifting the groups of all the columns in that row.

Consider a group of rows of size $1$: it contributes an $\text{lcm}$ over the essentially different schemes of all the grids in that row. Essentially different schemes can be counted with KMP in linear time.
We first consider a single block, requiring both its length and width to be at least $2$. By cyclically shifting its rows and columns we can realise swaps of adjacent elements, and hence rearrange the block freely.
It would be wrong to simply guess that the blocks can be rearranged independently; there must be conditions we have not considered, because row and column operations act on all blocks in that row or column in parallel. Let us first examine the case where the elements in the block are pairwise distinct:
> Classical conclusion 1: cyclically shifting a sequence of length $n$ is equivalent to performing $n-1$ swaps.
>
> Classical conclusion 2: swapping two elements of a sequence changes the parity of its inversion count.
This reveals that no matter how we arrange the elements within a block: if the row length is even, an operation on a column block changes the parity of that block's inversion count, and similarly for row blocks.
So the parity of each block's inversion count is tied to the number of operations on its row and column. One can abstract each block into an edge of a bipartite graph between rows and columns (if a row does not affect inversion parity, connect it to $0$ instead; similarly for columns). Each vertex then chooses a parity, and the value of an edge is the XOR of its endpoints. The problem becomes counting essentially different edge-weight assignments.
This subproblem is standard: run a DFS on each connected component to build a spanning tree; the tree edges can be adjusted freely, while the non-tree edges are expressible through tree edges. Done.
Time complexity: $\mathcal O(nm)$.
## K. Security at Museums
- Difficulty factor: $\text{unavailable}$.
- Quality factor: $\text{unavailable}$.
I am not good at computational geometry.
## L. Tavern Chess
- Difficulty factor: $1.5$.
- Quality factor: $0.5$.
$\text{taken attacks}$ What the hell does that mean?
Time complexity: $\mathcal O((n!)^2)$.
## M. Vulpecula
- Difficulty factor: $10$.
- Quality factor: $10$.
A technical problem. The difficulty lies in the merging of linear bases.
First, capital letters $A,B,C,\dots$ denote linear bases, i.e., sets of linearly independent vectors in $\mathbb{Z}_2^n$. Let $\text{span}(A)$ denote the space spanned by the linear basis $A$. We say $A=B$ if and only if $\text{span}(A)=\text{span}(B)$.
Below, vector addition $+$ means carry-less addition, i.e., XOR. Vectors $i,j\in \mathbb{Z}_2^n$ are orthogonal if and only if $\text{popcount}(i\,\&\,j)\bmod 2=0$, where $\&$ is bitwise AND.
For a linear basis $A$, we define its orthogonal linear basis $A^\perp$ by two requirements: every element of $\text{span}(A)$ is orthogonal to every element of $\text{span}(A^\perp)$, and $\operatorname{rank}(A)+\operatorname{rank}(A^\perp)=n$. We then dig into some elegant properties of $A^\perp$.
First consider the construction of $A^\perp$. Make the $\text{highbit}$s of the elements of $A$ pairwise distinct, and record all these $\text{highbit}$ positions as the key bits.
The process of constructing $A^\perp$ is as follows; note that the $A^\perp$ built by this algorithm has **pairwise distinct** $\text{lowbit}$s.
- For each non-key bit $i$ of $A$, build an element with $\text{lowbit}=i$. For each key bit $j>i$, consider what the bit at position $j$ should be: we just choose the $0/1$ value so that $A^\perp_i\perp A_j$.
It can be shown that orthogonality is additive: if $a\perp c$ and $b\perp c$, then $(a+b)\perp c$.
It is easy to see that every basis vector of $A^\perp$ is orthogonal to every basis vector of $A$, hence $\text{span}(A^\perp)\perp\text{span}(A)$.
Having constructed $A^\perp$, what is it good for?
- Lemma $1$: let $S=\text{span}(A)$, identified with its indicator vector, and let $k=\operatorname{rank}(A)$. Then every coefficient of $\hat S$ is either $0$ or $2^k$, where $\hat S$ denotes the result of applying $\text{FWT}_{\text{XOR}}$ (the fast Walsh transform) to $S$.
Proof: $S*S=2^k\times S$, where the left side is XOR convolution and the right side is scalar multiplication; this is clear because $\text{span}(A)$ is closed under $+$.
Hence $\hat S\times\hat S=2^k\times\hat S$, where the left side is pointwise multiplication.
So each coefficient $c$ of $\hat S$ satisfies $c^2=2^kc$, i.e., $c\in\{0,2^k\}$.
- Important corollary: $\hat S=2^{k}\,\text{span}(S^\perp)$.
Proof: $[x^t]\hat S=\sum _{s\in S} (-1)^{\text{popcount}(s\,\&\,t)}$; the coefficient can be $2^k$ only if $t$ is orthogonal to every $s$, and otherwise the terms cancel to $0$. That is, the nonzero positions are exactly $t\in \text{span}(S^\perp)$.
Important application: $\text{span}(A)\cap\text{span}(B)=\text{span}((A^\perp\cup B^\perp)^\perp)$.
Proof: the derivation below ignores constant factors, i.e., it only distinguishes zero from nonzero coefficients.
The indicator of $\text{span}(A)\cap\text{span}(B)$ is the pointwise product of the two indicators. Taking FWTs turns this product into the XOR convolution of $\text{span}(A^\perp)$ and $\text{span}(B^\perp)$, whose support is $\text{span}(A^\perp\cup B^\perp)$; transforming back, by the corollary, yields $\text{span}((A^\perp\cup B^\perp)^\perp)$.
So we have $\mathcal O(n^2)$ linear-basis merging.
Returning to the problem: we just need to maintain, for each element, the earliest time at which it enters the linear basis, sweeping the tree twice, once bottom-up and once top-down. Each query is then answered greedily on that basis.