22.1 Representations of graphs
22.1-1
Given an adjacency-list representation of a directed graph, how long does it take to compute the $\text{out-degree}$ of every vertex? How long does it take to compute the $\text{in-degree}$s?

The time to compute the $\text{out-degree}$ of every vertex is
$$\sum_{v \in V}O(\text{out-degree}(v)) = O(E + V),$$
since each adjacency list is scanned exactly once.

As for the $\text{in-degree}$s, we have to scan through all adjacency lists and keep a counter for how many times each vertex has been pointed to. The time complexity is also $O(E + V)$, because we visit every vertex and every edge once.
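Both counts can be sketched in a few lines of Python, assuming (for illustration) that the graph is given as a dict mapping each vertex to the list of its successors:

```python
def degrees(adj):
    """Compute out- and in-degrees of a directed graph in O(V + E).

    adj: dict mapping each vertex to a list of its successors
    (an assumed input format; CLRS uses an array Adj[1..V] of linked lists).
    """
    # len() is O(1) per list, so this loop alone is O(V).
    out_degree = {u: len(adj[u]) for u in adj}
    # Scan every adjacency list once, counting incoming pointers: O(V + E).
    in_degree = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            in_degree[v] += 1
    return out_degree, in_degree
```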
22.1-2
Give an adjacency-list representation for a complete binary tree on $7$ vertices. Give an equivalent adjacency-matrix representation. Assume that vertices are numbered from $1$ to $7$ as in a binary heap.

Adjacency-list representation
$$ \begin{aligned} 1 & \to 2 \to 3 \\ 2 & \to 1 \to 4 \to 5 \\ 3 & \to 1 \to 6 \to 7 \\ 4 & \to 2 \\ 5 & \to 2 \\ 6 & \to 3 \\ 7 & \to 3 \end{aligned} $$

Adjacency-matrix representation
$$ \begin{array}{cccccccc} & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 3 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 4 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 5 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 6 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 7 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \hline \end{array} $$
22.1-3
The transpose of a directed graph $G = (V, E)$ is the graph $G^\text T = (V, E^\text T)$, where $E^\text T = \{(v, u) \in V \times V: (u, v) \in E \}$. Thus, $G^\text T$ is $G$ with all its edges reversed. Describe efficient algorithms for computing $G^\text T$ from $G$, for both the adjacency-list and adjacency-matrix representations of $G$. Analyze the running times of your algorithms.

Adjacency-list representation
Assume the original adjacency list is $Adj$.
let Adj'[1..V] be a new adjacency list of the transposed G^T
for each vertex u ∈ G.V
    for each vertex v ∈ Adj[u]
        INSERT(Adj'[v], u)
Time complexity: $O(E + V)$.
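A minimal Python sketch of the same idea, assuming the graph is a dict mapping each vertex to a list of its successors:

```python
def transpose(adj):
    """Compute G^T: for each edge (u, v) of G, add (v, u). O(V + E)."""
    adj_t = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            adj_t[v].append(u)  # reverse the edge (u, v)
    return adj_t
```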

Adjacency-matrix representation
Transpose the original matrix by iterating over every entry above the diagonal and swapping it with the corresponding entry below the diagonal.
Time complexity: $O(V^2)$.
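The in-place swap can be sketched as follows (a 0-indexed list-of-lists matrix is an assumption for illustration):

```python
def transpose_matrix(a):
    """Transpose a V x V matrix in place by swapping each entry above the
    diagonal with its mirror below the diagonal: O(V^2)."""
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j], a[j][i] = a[j][i], a[i][j]
    return a
```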
22.1-4
Given an adjacency-list representation of a multigraph $G = (V, E)$, describe an $O(V + E)$-time algorithm to compute the adjacency-list representation of the "equivalent" undirected graph $G' = (V, E')$, where $E'$ consists of the edges in $E$ with all multiple edges between two vertices replaced by a single edge and with all self-loops removed.
EQUIVALENT-UNDIRECTED-GRAPH(G)
    let Adj'[1..V] be a new adjacency list
    let A be a 0-initialized array of size V
    for each vertex u ∈ G.V
        for each v ∈ Adj[u]
            if v != u and A[v] != u
                A[v] = u
                INSERT(Adj'[u], v)
Note that no entry of $A$ has value $u$ before the iteration of the outer for loop that processes vertex $u$. That is why setting $A[v] = u$ in the inner for loop safely marks that the edge $(u, v)$ has already been added. Since we look at each of the $V$ adjacency lists once and each of the $E$ edges once, the time complexity is $O(V + E)$.
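The same procedure in Python, assuming a dict-of-lists multigraph (and that `None` is not a vertex, so it can serve as the 0-initialization sentinel):

```python
def equivalent_undirected(adj):
    """Drop self-loops and duplicate edges from a multigraph in O(V + E)."""
    adj2 = {u: [] for u in adj}
    # last_added plays the role of the array A: last_added[v] == u means the
    # edge (u, v) was already appended while scanning adj[u].
    last_added = {v: None for v in adj}
    for u in adj:
        for v in adj[u]:
            if v != u and last_added[v] != u:  # skip self-loops and duplicates
                last_added[v] = u
                adj2[u].append(v)
    return adj2
```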
22.1-5
The square of a directed graph $G = (V, E)$ is the graph $G^2 = (V, E^2)$ such that $(u, v) \in E^2$ if and only if $G$ contains a path with at most two edges between $u$ and $v$. Describe efficient algorithms for computing $G^2$ from $G$ for both the adjacency-list and adjacency-matrix representations of $G$. Analyze the running times of your algorithms.

Adjacency-list representation
To compute $G^2$ from the adjacency-list representation $Adj$ of $G$, for each vertex $u$ and each $v \in Adj[u]$ we append $v$ and every vertex of $Adj[v]$ to $Adj2[u]$, where $Adj2$ is the adjacency-list representation of $G^2$. Since for every edge of $Adj$ we scan at most $V$ vertices, we compute $Adj2$ in time $O(VE)$.
After we have computed $Adj2$, we have to remove duplicate edges from the lists. Removing duplicate edges takes $O(V + E')$ time, where $E' = O(VE)$ is the number of edges in $Adj2$, as shown in Exercise 22.1-4. Thus the total running time is
$$O(VE) + O(V + VE) = O(VE).$$
However, if the original graph $G$ contains self-loops, we should modify the deduplication algorithm so that self-loops are not removed.
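A Python sketch of the adjacency-list construction, again assuming a dict-based graph; using sets makes the duplicate removal implicit (a linked-list version would dedupe afterwards instead):

```python
def square(adj):
    """Compute G^2: (u, v) is an edge iff G has a path of at most two edges
    from u to v. O(VE) before considering set overhead."""
    adj2 = {u: set() for u in adj}
    for u in adj:
        for v in adj[u]:
            adj2[u].add(v)           # paths of one edge
            adj2[u].update(adj[v])   # paths of two edges through v
    return adj2
```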

Adjacency-matrix representation
Let $A$ denote the adjacency-matrix representation of $G$. Since a path may consist of one edge as well as two, the adjacency-matrix representation of $G^2$ has a $1$ in entry $(u, v)$ if and only if $A[u][v] = 1$ or $(A^2)[u][v] \ne 0$. Computing $A^2$ can be done in time $O(V^3)$ (and even faster, theoretically; Strassen's algorithm, for example, will compute $A^2$ in $O(V^{\lg 7})$).
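A straightforward $O(V^3)$ Boolean version in Python (0/1 list-of-lists matrices are an assumption for illustration); note that the one-edge entries of $A$ are kept, because a path in $G^2$ may consist of a single edge:

```python
def square_matrix(a):
    """b[i][j] = 1 iff a[i][j] = 1, or a[i][k] = a[k][j] = 1 for some k."""
    n = len(a)
    b = [row[:] for row in a]        # keep the one-edge paths
    for i in range(n):
        for k in range(n):
            if a[i][k]:
                for j in range(n):
                    if a[k][j]:
                        b[i][j] = 1  # two-edge path i -> k -> j
    return b
```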
22.1-6
Most graph algorithms that take an adjacency-matrix representation as input require time $\Omega(V^2)$, but there are some exceptions. Show how to determine whether a directed graph $G$ contains a universal sink (a vertex with $\text{in-degree}$ $|V| - 1$ and $\text{out-degree}$ $0$) in time $O(V)$, given an adjacency matrix for $G$.
Start by examining position $(1, 1)$ in the adjacency matrix. When examining position $(i, j)$,
- if a $1$ is encountered, examine position $(i + 1, j)$, and
- if a $0$ is encountered, examine position $(i, j + 1)$.
Once either $i$ or $j$ is equal to $V$, terminate.
CONTAINS-UNIVERSAL-SINK(M)
    i = j = 1
    while i < V and j < V
        // there's an outgoing edge, so vertex i cannot be a sink; examine the next row
        if M[i, j] == 1
            i = i + 1
        // there's no outgoing edge, so move on to the next column of the current row
        else j = j + 1
    check whether vertex i is a universal sink
If a graph contains a universal sink, then it must be at vertex $i$.
To see this, suppose that vertex $k$ is a universal sink. Since $k$ is a universal sink, row $k$ will be filled with $0$'s, and column $k$ will be filled with $1$'s except for $M[k, k]$, which is filled with a $0$. Eventually, once row $k$ is hit, the algorithm will continue to increment column $j$ until $j = V$.
To be sure that row $k$ is eventually hit, note that once column $k$ is reached, the algorithm will continue to increment $i$ until it reaches $k$.
The while loop runs in $O(V)$ time because $i + j$ increases by one in each iteration, and checking whether vertex $i$ is a universal sink takes another $O(V)$. Therefore, the total running time is $O(V) + O(V) = O(V)$.
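A 0-indexed Python sketch of the pointer walk plus the final verification, assuming the matrix is a list of 0/1 rows:

```python
def universal_sink(m):
    """Return the universal sink of the V x V adjacency matrix m, or None."""
    n = len(m)
    i = j = 0
    while i < n and j < n:
        if m[i][j] == 1:
            i += 1          # row i has an outgoing edge, so i cannot be a sink
        else:
            j += 1          # no edge here; keep scanning row i
    if i == n:              # every row was eliminated by an outgoing edge
        return None
    # O(V) verification of the only remaining candidate.
    for k in range(n):
        if m[i][k] == 1:
            return None                  # outgoing edge: not a sink
        if k != i and m[k][i] == 0:
            return None                  # missing incoming edge
    return i
```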
22.1-7
The incidence matrix of a directed graph $G = (V, E)$ with no self-loops is a $V \times E$ matrix $B = (b_{ij})$ such that
$$ b_{ij} = \begin{cases} -1 & \text{if edge $j$ leaves vertex $i$}, \\ 1 & \text{if edge $j$ enters vertex $i$}, \\ 0 & \text{otherwise}. \end{cases} $$
Describe what the entries of the matrix product $BB^\text T$ represent, where $B^\text T$ is the transpose of $B$.
$$BB^\text T(i, j) = \sum\limits_{e \in E}b_{ie} b_{ej}^\text T = \sum\limits_{e \in E} b_{ie}b_{je}.$$
- If $i = j$, then $b_{ie} b_{je} = 1$ (it is $1 \cdot 1$ or $(-1) \cdot (-1)$) whenever $e$ enters or leaves vertex $i$, and $0$ otherwise.
- If $i \ne j$, then $b_{ie} b_{je} = -1$ when $e = (i, j)$ or $e = (j, i)$, and $0$ otherwise.
Thus,
$$ BB^\text T(i, j) = \begin{cases} \text{degree of $i$} = \text{in-degree} + \text{out-degree} & \text{if $i = j$}, \\ -(\text{number of edges connecting $i$ and $j$}) & \text{if $i \ne j$}. \end{cases} $$
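As a concrete check (a small example made up for illustration), take $V = \{1, 2, 3\}$ with edges $e_1 = (1, 2)$, $e_2 = (2, 3)$, $e_3 = (1, 3)$. Then
$$ B = \begin{pmatrix} -1 & 0 & -1 \\ 1 & -1 & 0 \\ 0 & 1 & 1 \end{pmatrix}, \qquad BB^\text T = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}. $$
Each diagonal entry is the degree of the corresponding vertex, and each off-diagonal entry is $-1$ because exactly one edge connects each pair of vertices.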
22.1-8
Suppose that instead of a linked list, each array entry $Adj[u]$ is a hash table containing the vertices $v$ for which $(u, v) \in E$. If all edge lookups are equally likely, what is the expected time to determine whether an edge is in the graph? What disadvantages does this scheme have? Suggest an alternate data structure for each edge list that solves these problems. Does your alternative have disadvantages compared to the hash table?
The expected lookup time is $O(1)$, but in the worst case it could take $O(V)$ (for example, when every neighbor of $u$ hashes to the same slot).
If we instead keep the vertices of each adjacency list in a sorted array, we can perform a binary search so that the worst-case lookup time is $O(\lg V)$; the disadvantage is that the expected lookup time, $\Theta(\lg V)$, is worse than the hash table's $O(1)$.
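The sorted-array lookup can be sketched with the standard library's `bisect` module, assuming `sorted_adj[u]` is a sorted list of $u$'s neighbors:

```python
import bisect

def has_edge(sorted_adj, u, v):
    """Check whether (u, v) is an edge in O(lg V) worst case via binary search."""
    neighbors = sorted_adj[u]
    i = bisect.bisect_left(neighbors, v)   # leftmost insertion point for v
    return i < len(neighbors) and neighbors[i] == v
```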