Jeffrey Walraven
IEEE FOCS 2019
Date: | Category: Conference | Tags: IEEE, Computer Science, Conference
Overview
The IEEE Foundations of Computer Science Conference (FOCS) is a yearly gathering of computer science researchers to present the best of their research. The primary format is a single presenter breaking down or giving an overview of a research paper that they authored or co-authored. This year there was a lot of focus on machine learning, graph theory, auction game theory, algorithm complexity analysis, and quantum computation. I personally found the sections on cryptography the most engaging. As a Software Engineer, my hope was to gain a better understanding of what is occurring in the Computer Science research realm, challenge myself academically, and improve my understanding of the Computer Science community.
Key Takeaways
Here are a few personal takeaways from the conference:
IEEE researchers are very intelligent and driven. I gained a newfound respect for the work they do and the way they do it. They each have unique perspectives on problems and take many different pathways to solve them. They come from all over the world and work hard to come up with new proofs and solutions.
Correctness and provability are extremely important to Computer Science researchers. They recognize that industry needs are moving quickly and that, at times, implementers will outpace their research. But they also see that sometimes the solution that gets you by is not the one you want for the foreseeable future. There were multiple talks in which the presenter acknowledged an unproven industry-standard practice and then attempted to prove out a new and better solution.
Google still needs to prove their “quantum supremacy”. Google announced in Oct 2019 that it had reached quantum supremacy (see: Google Claims ‘Quantum Supremacy’). Of course, researchers are still skeptical and there is some work being done on how to prove that a “true” quantum computer has been created. See the notes/paper by Alexandru Gheorghiu and Thomas Vidick “Computationally-secure and composable remote state preparation” for more info.
Machine Learning is a hot topic in the research community. It is moving very quickly with new proofs and ideas. We will likely see a lot of great improvements made to algorithms in this area in the next few years.
There is a large gap between theory and practice. This gap has increased over time and it is very difficult to bridge that difference of understanding today. This was made very clear by some presentations entirely devoid of discussion as to why a topic was important. Some presenters were better at listing example applications, though sometimes even those were just applications to theories. It is important as Software Engineers that we understand that this split exists and where we fit into improving/understanding the pipeline from theory to implementation.
But there is hope for more directly practical research in the future! Some of the talks focused on the problem of implicit obfuscation in the Computer Science research community as a whole. Interestingly, there was a talk on communication where the author explicitly called out extraordinarily long proofs as a serious issue that must be addressed (see "Context In Communication"). There were also several talks on more practical mathematical analysis than Worst Case Analysis. The series was called "Beyond Worst Case Analysis", and each talk addressed an area in which a more practical metric exists. These metrics focus on real-world application and on making sure algorithms work well not just on paper but in practice.
Favorite Presentations
- Local (Sequential) Computation Algorithms - Ronitt Rubinfeld
- An Optimal Document Exchange Protocol - Bernhard Haeupler
- NEEXP $\subseteq$ MIP* - Anand Natarajan
- Perfect zero knowledge for quantum multiprover interactive proofs - Alex B. Grilo
- Fast generalized DFTs for all finite groups - Chris Umans
- A Quantum Query Complexity Trichotomy for Regular Languages - Luke Schaeffer
- Computationally Secure and Composable Remote State Preparation - Alexandru Gheorghiu
Presentation Notes
Distribution-Free Models
Presented by Tim Roughgarden
Overview
This talk discussed graph models in which algorithms can be analyzed for "correctness" as well as performance.
Notes
- BWCA = Beyond Worst Case Analysis = Where worst case gives bad advice
- Publishing Book in Early 2020
Part 1
- No consensus on social network models
- Triadic Closure (Granovetter)
- Power Law doesn’t help much for hard problems (NP)
- Triangle Dense Graphs
- Transitivity (a toy computation follows the Part 1 list)
- FB ≈ 16%
- Inverse Theorem
- A graph with constant transitivity is approximately the union of the cliques
- Proof:
- Definition: Jaccard similarity (number of triangles in the graph)
- Sub-routine: The Cleaner
- 1-hop fails: complete tripartite graph
- The Extractor: greedily add vertices to fill triangles
- Non trivial Lemma
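To make the transitivity numbers above concrete, here is a minimal sketch (my own toy example, not from the talk) that counts what fraction of two-edge paths (wedges) close into triangles; the graph and values are made up.

```python
from itertools import combinations

def transitivity(adj):
    """Fraction of two-edge paths (wedges) whose endpoints are also adjacent."""
    closed, total = 0, 0
    for center, neighbors in adj.items():
        for u, v in combinations(sorted(neighbors), 2):
            total += 1
            if v in adj[u]:
                closed += 1
    return closed / total if total else 0.0

# Toy graph: a triangle {a, b, c} plus a pendant vertex d attached to c.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(transitivity(graph))  # 3 closed wedges out of 5 -> 0.6
```

A triangle-dense social graph scores noticeably higher on this measure than a sparse random graph of the same size, which is the property the inverse theorem exploits.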
Part 2
- c-Closed Graphs (c-vertices)
- Strongly closed
- Weakly closed
- Enumerate Maximal Cliques
- Can be done in polynomial time
- Focus on subset Clique
- Open Questions
- Find a quantitative solution
- Stronger assumptions
- Distribution free
BWCA In Algorithmic Game Theory
Presented by Inbal Talgam-Cohen
Overview
Applying practical analysis to game theory - specifically the auction / contract design theories.
Notes
- Algorithms vs microeconomics
- Worst-case vs Average-case
- Worst-case = More robust
- Average-case = Useful + faster
- BWCA bridges the gap
- Simple mechanisms used in practice
- Even with suboptimal worst-case + average case
- Practical cases make it worth it
- Mechanism Design: incentives + limited info
- Agents want it specifically for themselves (greedy agents)
- Application Areas
- Auction Design
- Contract Design
- Bayesian Analysis
- Average case limitations
- Brittle
- Don’t know inside information
- Worst Case
- Non starter for revenue
- Semi Random Model
- Know class of prior distributions
- Don’t have to know all distributions
- Maximize relative performance
- Conclusion: Need BWCA to analyze simple mechanisms
Certified Algorithms
Presented by Yury Makarychev and Konstantin Makarychev
Overview
Focused on a pattern for defining algorithms whose performance guarantees are not framed in terms of worst-case or average-case complexity. These were named "certified" algorithms.
Notes
- Overview
- Perturbation Resilience
- New Certified algorithms
- Real life instances = much easier
- “Clustering is only difficult when it doesn’t matter”
- NOTE: Remember case of three clusters. The obvious solution was easy.
- Perturbation Resilience means that the instance is stable
- i.e., it has the same optimal solution after perturbation
- In ML we want to find “true” solution
- There is a relation between approximation algorithms + PR instances
- alpha and omega are different
- Combine both to give PR for performance increase and approximate for fallback
- Example: Agglomerative clustering
- Key Takeaway: If a problem is hard, find another path
- Conclusion: Use certified algorithms in cases where worst-case does not make sense
Random Shortest Path Metrics
Presented by Bodo Manthey
Overview
Proposes a method for analyzing shortest path algorithms. Specifically focuses on the case of the Traveling Salesman Problem (TSP).
Notes
- Many optimization problems have a random structure
- First-passage Percolation = RSP
- Distribution of Random Shortest Path
- $\mathbb{E}[d(u, v)] = \frac{H_{n-1}}{n} \approx \frac{\ln(n)}{n}$
- Global structure
- Nearest Neighbor for TSP (a toy version is sketched after this list)
- Can use any probability distribution
- Exponential is easier
- 2-Opt heuristic approximation
- Open Problems
- More sparse graphs
- Directed graphs
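As a reference point for the nearest-neighbor bullet above, here is a minimal sketch (my own, not the speaker's analysis) of the greedy nearest-neighbor tour on a small random instance. For brevity it uses raw exponential edge weights rather than the shortest-path metric the talk analyzes, and the instance size and seed are made up.

```python
import random

def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP heuristic: always travel to the closest unvisited city."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        current = tour[-1]
        nxt = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Random symmetric weights drawn from an exponential distribution.
random.seed(0)
n = 6
dist = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.expovariate(1.0)

print(nearest_neighbor_tour(dist))
```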
How to hide a clique?
Presented by Uriel Feige
Notes
- With high probability
- Largest clique = 2 log n
- Greedy possibly log n
- Hidden/Planted
- A set S of k random vertices are made into a clique
- Max can be found in polynomial time
- See: Hiding cliques for Cryptographic security
Context in Communication
Presented by Madhu Sudan
Overview
Provided insight into the history and new research in the area of unbounded communication. Concluded that we need to provide better context for proofs or they grow exponentially. Context for proofs should be given in the form of axioms. I.e. instead of trying to solve everything in one go, break it into smaller “learnable” pieces.
Program Obfuscation
Presented by Huijia (Rachel) Lin
Overview
Provided insight into recent research and implementations in the area of program obfuscation. There is new light at the end of the tunnel for performing program obfuscation securely and without cryptographic leaks.
Data Driven Algorithm Design
Presented by Maria-Florina Balcan
Overview
Use algorithmic learning to drive algorithm design.
Notes
- Classic algorithm design: Solve worst case instance
- Data driven algorithm design: use learning + data
- Until recently there was no proof
- Make provable guarantees
- Example: Clustering Problems
- Will algorithm work on new problems?
- Need formal guarantees
Local (Sequential) Computation Algorithms
Presented by Ronitt Rubinfeld
Overview
Provided a tutorial on using local computation algorithms for efficient low-knowledge distributed computing.
Notes
- What do you do for Large Input + Large Output?
- Do I need to compute the full output?
- Do I need to see the full input?
- Greedy algorithm has drawbacks
- Local Computation Algorithms in Swarms
- Make probes from queries to input
- Compute independently (not distributed)
- Provide “illusion” of fully constructed output
- Distributed Algorithms to the rescue!
- “Luby” Algorithm (a toy simulation is sketched after this list)
- Remaining live components are small!
- Distributed Markov Chain round
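For the “Luby” bullet above, here is a minimal single-machine simulation of Luby-style rounds (my own sketch, not the local computation algorithm framing from the talk); the 5-cycle example is made up.

```python
import random

def luby_mis(adj, seed=1):
    """Simulate Luby-style rounds: each round every live vertex picks a random
    value, local minima join the independent set, and their neighbors drop out."""
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        values = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(values[v] < values[u] for u in adj[v] if u in live)}
        mis |= winners
        live -= winners | {u for w in winners for u in adj[w]}
    return mis

# 5-cycle: every maximal independent set has size 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(luby_mis(cycle))
```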
How to Use Heuristics for Differential Privacy
By Seth Neel, Aaron Roth, Zhiwei Steven Wu; Presented by Seth Neel
Notes
- Differential Privacy Definition
- Optimization Oracles
- Weighted
- Heuristic
- Certified Heuristic
- Main Result: PRSMA
The Role of Interactivity in Local Differential Privacy
By Matthew Joseph, Jieming Mao, Seth Neel, Aaron Roth; Presented by Matthew Joseph
Overview
Providing privacy for data against ML algorithms.
Notes
- DMNS06 Differential Privacy (DP)
- Data -> Learning -> Noise -> Output
- Makes algorithms modular
- More restrictive = Local DP
- Data can be more isolated
- Local DP (a randomized-response sketch follows this list)
- No central DB (users keep data)
- Don’t have to store private data
- Costs
- More noise
- Don’t get to store data ;)
- Types
- Non-interactive
- Sequentially interactive
- Fully interactive
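The textbook non-interactive local DP mechanism for a single bit is randomized response, which is a useful mental model for the interactivity hierarchy above. This is a standard sketch, not one of the paper's protocols; the epsilon value and data are made up.

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    This gives epsilon-local differential privacy for a single bit."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if rng.random() < p_truth else 1 - bit

def debiased_mean(reports, epsilon):
    """Invert the known flip probability to estimate the true fraction of 1s."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

random.seed(0)
true_bits = [1] * 300 + [0] * 700      # true mean is 0.3
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
print(debiased_mean(reports, epsilon=1.0))  # noisy estimate near 0.3
```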
Learning from Outcomes: Evidence-Consistent Rankings
By Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona; Presented by Michael P. Kim
Overview
Many ranking algorithms end up with bias due to errors. How do we remove those errors and observe whether the ranking is "correct"?
Notes
- Why rankings?
- Underlying risk prediction
- Learn to rank without observing probabilities directly
- Small errors in ranking lead to systemic misrepresentation
- Studying ranking informs understanding of prediction
Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits
By Chao Tao, Qin Zhang, Yuan Zhou; Presented by Qin Zhang
Overview
To scale machine learning, you must use some sort of collaborative agents that stay in communication. This communication introduces difficulties and needs to be bounded.
Notes
- How to make machine learning scalable?
- Collaborative Agents (a toy round-based exploration sketch follows this list)
- Interaction between agents can be expensive
- Communication starts if all agents are in wait mode
- Speedup $= \frac{T(\text{best centralized})}{T(A)}$
- Two new techniques for proving round lower bounds
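For intuition on round-limited exploration, here is a toy single-agent successive-elimination sketch that batches pulls into a fixed number of rounds. It is not the paper's collaborative algorithm; the arm means, batch size, and elimination threshold are all made up.

```python
import random

def successive_elimination(means, pulls_per_round=200, rounds=5, seed=0):
    """Pull every surviving arm a batch of times per round, then eliminate arms
    whose empirical mean falls well below the current leader."""
    rng = random.Random(seed)
    alive = list(range(len(means)))
    totals = [0.0] * len(means)
    counts = [0] * len(means)
    for _ in range(rounds):
        for arm in alive:
            for _ in range(pulls_per_round):
                totals[arm] += 1.0 if rng.random() < means[arm] else 0.0
                counts[arm] += 1
        estimates = {arm: totals[arm] / counts[arm] for arm in alive}
        leader = max(estimates.values())
        gap = 0.05  # crude threshold; a real bound would use confidence radii
        alive = [arm for arm in alive if estimates[arm] >= leader - gap]
    return alive

print(successive_elimination([0.3, 0.5, 0.52, 0.7]))  # arm 3 should survive
```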
Adversarial Bandits with Knapsacks
By Nicole Immorlica, Karthik Abinav Sankararaman, Robert Schapire, Aleksandrs Slivkins; Presented by Karthik Abinav Sankararaman
Notes
- Maximize objective returns within constraints
- Bandit = Choose action; observe corresponding result
- Result:
- Optimal algorithm
- Both expectations + high probability
- Why BwK is hard?
- Distribution over arms beats any fixed arms
- Exploration consumes resources
- Why is adversarial BwK harder?
- Wild guesses about future sequences
Approximation Schemes for a Buyer with Independent Items via Symmetries
By Pravesh Kothari, Divyarthi Mohan, Ariel Schvartzman, Sahil Singla, S. Matthew Weinberg; Presented by Divyarthi Mohan
Notes
- How to maximize revenue?
- Align mechanisms with buyer incentives
- Truthful mechanism
- Find an optimal truthful mechanism
- Valuation
- Unit Demand = max (one item)
- Use approximation to get 1/4 for polynomial time
- But we can do better!
- 1-epsilon approximation in quasi-polynomial time (almost optimal)
- Symmetric Distributions are solvable
- Take Away: Bounded distributions are a mystery + symmetries rock!
Improved Truthful Mechanisms for Combinatorial Auctions with Submodular Bidders
By Sepehr Assadi, Sahil Singla; Presented by Sahil Singla
Notes
- Combinatorial Auctions: Welfare Maximization
- Warm-up
- Additive: simply give each item to the bidder who values it most
- Truthful mechanism
- Returns allocations and payments
- Goal: maximize welfare
- Vickrey-Clarke-Groves (VCG) mechanism: works but not efficiently
- Fixed Price Auctions (FPA) (a toy additive-valuation version is sketched after this list)
- Adversarial order + select best subset of remaining items
- Truthful and efficient given fixed prices
- Use FPAs as a “proxy” to learn prices
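Here is a toy fixed-price auction in the spirit of the FPA bullets above, assuming additive valuations so each bidder's "best subset" is simply every remaining item priced below its value. The bidders, items, and prices are made up; the actual results concern submodular bidders and learned prices.

```python
def fixed_price_auction(bidders, prices, order):
    """Each arriving bidder buys every remaining item priced below their value
    for it (additive valuations make the 'best subset' choice trivial)."""
    remaining = set(prices)
    allocation, payments = {}, {}
    for name in order:
        values = bidders[name]
        take = {item for item in remaining if values.get(item, 0) > prices[item]}
        allocation[name] = take
        payments[name] = sum(prices[item] for item in take)
        remaining -= take
    return allocation, payments

bidders = {
    "alice": {"apple": 5, "book": 2},
    "bob": {"apple": 4, "book": 6},
}
prices = {"apple": 3, "book": 3}
print(fixed_price_auction(bidders, prices, order=["alice", "bob"]))
# alice takes the apple (5 > 3), bob takes the book (6 > 3)
```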
Settling the Communication Complexity of Combinatorial Auctions with Two Subadditive Bidders
By Tomer Ezra, Michal Feldman, Eric Neyman, Inbal Talgam-Cohen, S. Matthew Weinberg; Presented by Eric Neyman
Notes
- Nontrivial welfare guarantees are impossible
- Make assumption about $v_i(\cdot)$!
- Use sub-additivity
- How can we use 2 bidders in polynomial communication?
- 2 is extra interesting (see paper)
- Trivial approximation only gets 1/2 approximation
- Prove that this is optimal
- Showing this is hard
- $V_A$ and $V_B$ -> Complement additivity
- $V(\cdot)$ is CA if (see paper)
- Extend to randomized protocols for equality
- Maybe find mean probability
Reed-Muller codes polarize
By Emmanuel Abbe, Min Ye; Presented by Min Ye
Notes
- Coding
- Message Length k: Code dimension
- Encoded length N: Code Length
- Most based on Shannon Codes
- Another type: Reed-Muller
- Do they achieve capacity?
- Advances on general channels (this talk)
- Proving polarization of Code
- Only 0 and 1 are stable -> therefore $G_n$ polarizes
SETH-hardness of Coding Problems
By Noah Stephens-Davidowitz, Vinod Vaikuntanathan; Presented by Noah Stephens-Davidowitz
Notes
- $\mathbb{F}_2 = \{0, 1\}$
- $\mathbb{F}_2^m = \{0, 1\}^m$
- NCP = Nearest Codeword Problem
- Trivial algorithm: $2^n$ (a brute-force sketch follows this list)
- NP-hardness
- No nontrivial solution via SETH
- SETH = The Strong Exponential Time Hypothesis
- k-SAT Clause
- Minimum Distance Problem (MDP)
- Algorithmically similar to NCP
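To ground the "trivial algorithm" bullet above, here is a brute-force nearest-codeword search that enumerates every message of a tiny, made-up [4, 2] code over $\mathbb{F}_2$; the SETH-hardness result says one should not expect to do fundamentally better in general.

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def nearest_codeword(generator, target):
    """Brute force: encode each of the 2^k messages over F_2 and keep the
    codeword closest in Hamming distance to the target vector."""
    k = len(generator)
    n = len(generator[0])
    best = None
    for message in product([0, 1], repeat=k):
        codeword = [0] * n
        for bit, row in zip(message, generator):
            if bit:
                codeword = [c ^ r for c, r in zip(codeword, row)]
        d = hamming(codeword, target)
        if best is None or d < best[0]:
            best = (d, codeword, message)
    return best

# Toy [4, 2] code; the generator matrix rows are made up.
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
print(nearest_codeword(G, target=[1, 1, 1, 1]))
```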
Quasilinear time list-decodable codes for space bounded channels
By Swastik Kopparty, Ronen Shaltiel, Jad Silbak; Presented by Jad Silbak
Notes
- Binary List Decodable Codes
- See diagram in notebook
- Hamming Channels
- Shannon Channels
- Lipton [LIP94] -> Online Space Channels
- OSC
- Reads input in 1 pass using space S
- Reed-Solomon works
- $Raw \to RS_{Raw}$
An Optimal Document Exchange Protocol
By Bernhard Haeupler; Presented by Bernhard Haeupler
Notes
- Two parties with similar strings (but slightly different); a naive block-hash sketch follows this list
- See diagram in notebook
- ECC: Error Correcting Codes
- Systematic ECC = Deterministic Document Exchange
- Applications:
- Document Exchange:
- File Sync
- DB Maintenance
- Version Control
- Caching
- etc.
- ECC:
- Disk Reads
- DNA sequencing
- etc.
- ECCs for edit distance
- Started back in the 60s but wasn’t improved until recently [BZ16]
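For intuition on the document exchange problem, here is a naive rsync-flavored sketch (my own, emphatically not Haeupler's optimal protocol): Alice sends short per-block hashes and Bob asks only for the blocks that differ. Block size and strings are made up, and this naive version breaks down under insertions and deletions, which is exactly the edit-distance setting the optimal protocol handles.

```python
import hashlib

def block_hashes(text, block=8):
    """Alice's side: split the string into blocks and send a short hash per block."""
    blocks = [text[i:i + block] for i in range(0, len(text), block)]
    return [hashlib.sha256(b.encode()).hexdigest()[:8] for b in blocks]

def blocks_to_request(text, hashes, block=8):
    """Bob's side: hash his own blocks and request only the ones that differ."""
    blocks = [text[i:i + block] for i in range(0, len(text), block)]
    mine = [hashlib.sha256(b.encode()).hexdigest()[:8] for b in blocks]
    return [i for i, (a, b) in enumerate(zip(hashes, mine)) if a != b]

alice = "the quick brown fox jumps over the lazy dog"
bob = "the quick brown cat jumps over the lazy dog"
print(blocks_to_request(bob, block_hashes(alice)))  # indices of mismatched blocks
```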
Radio Network Coding Requires Logarithmic Overhead
By Klim Efremenko, Gillat Kol, Raghuvansh R. Saxena; Presented by Raghuvansh R. Saxena
Notes
- Collision as Silence Model
- Undirected graph
- Communication proceeds in rounds
- More than 1 in round = collision
- Node will not receive message
- Silence
- Noise in radio networks
- Messages dropped with probability of 1/2
- Worst case is $O(\log n)$
Lower bounds for maximal matchings and maximal independent sets
By Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikaël Rabie, Jukka Suomela; Presented by Sebastian Brandt
Notes
- Two Classical Problems: Maximal matching + Maximal independent set
- Can this be solved efficiently in a distributed setting?
- Approach: Simple Algorithm
- Send message in rounds
- This is optimal!
- Proof sketch:
- See notebook or paper
Automating Resolution is NP-Hard*
By Albert Atserias, Moritz Müller; Presented by Albert Atserias
Notes
- Satisfiability Problem and Resolution
- 200 TB Proof generated!
- Could we find short proofs under the promise that they exist?
- Theorem: Resolution is not automatable in polynomial time unless P=NP
- Theorem: Tree-like resolution is automatable in time $n^{O(s)}$
- Reflection Principle for Resolution
- Automating resolution is NP hard!
NEEXP $\subseteq$ MIP*
By Anand Natarajan, John Wright; Presented by Anand Natarajan
Notes
- Interactive Proofs
- Polynomial-time verifier
- Multi-prover Interactive Proofs (MIP)
- MIP = NEXP [BFL91]
- Provers are all powerful but cannot communicate
- Can’t use classical physics
- Could use Quantum Entanglement (high probability of “randomness”)
- Entanglement can hurt soundness
- Honest provers need exp(x) qubits
- Compressing with entanglement
- Question reduction exp(n) -> poly(n)
- Answer reduction exp(n) -> poly(n)
- Answer can use general compression
- Question needs to use quantum entanglement
- Only sound if Alice only knows x and Bob only knows y for $(x, y) \sim D$
- Magic Square
- No consistent assignment
- Quantum Assignment: cell gets random quantum value that satisfies constraints!
- Scale up magic square using PCP technique etc.
Perfect zero knowledge for quantum multiprover interactive proofs
By Alex B. Grilo, William Slofstra, Henry Yuen; Presented by Alex B. Grilo
Notes
- Zero Knowledge Quantum research is at the intersection of crypto and quantum research
- Multi-prover interactive proofs
- Go multistep proof
- In cryptography we want zero-knowledge
- Should not learn more than what is given by the provers
- Perfect zero-knowledge: distributions of X and Y are equal (where X and Y are the pieces Alice and Bob have)
- Quantum Proofs: Stick “Quantum” in front of traditional provers
- QMA, QIP, MIP* (instead of QMIP - MIP* works in this case)
- Perfect Quantum Zero-Knowledge . . .
- $PZK\text{-}MIP^* = MIP^*$, and $MIP^* \supseteq NEEXP$
- Simulatable Codes ~ Steane Code
- No matter what 2 qubits in code $P = 1/4$
- Simulatable Codes: non-transversal gates
- Computation by teleportation with magic state
- Logical output is the totally mixed state
- Compression schemes [FJVV18]
- Open Questions
- More opportunities to apply simulatable codes in quantum cryptography
- $MIP^{ns} = PZK - MIP^{ns}$?
Leakage-Resilient Secret Sharing Against Colluding Parties
By Ashutosh Kumar, Raghu Meka, Amit Sahai; Presented by Ashutosh Kumar
Notes
- Secret Sharing
- Correctness
- Secrecy
- Leakage? Shamir sharing can have leakage (a plain Shamir sketch follows this list)
- Leakage-Resilience: secret statistically hidden even given leakage from each share
- Adversary runs multi-party protocol to "learn" transcript
- Bounded Collusion Protocols (BCP)
- p-party collusion protocol
- 1-party CP: Number-in-hand (NIH)
- (n-1)-party CP: Number-on-forehead (NOF)
- Non-Malleable Secret Sharing (NMSS)
- If shares are tampered with, modified encoded message is uncorrelated with unmodified message
- Theorem: Can convert any secret sharing into LRNMSS
- Leakage only allowed from overlapping subsets of the underlying scheme
- Open Problems
- Leakage from disjoint subsets? (without overlapping scheme)
- Leakage resilient multi-party computation?
- Joint leakage?
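Since the talk builds on Shamir-style sharing, here is a minimal plain Shamir secret sharing sketch over a small prime field (toy parameters of my own choosing). Plain Shamir is exactly the kind of scheme that can leak under the attacks discussed above; leakage resilience needs the extra machinery from the paper.

```python
import random

PRIME = 2 ** 13 - 1  # small Mersenne prime; fine for a toy field

def share(secret, k, n, rng=random):
    """Split `secret` into n shares so that any k of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

random.seed(7)
shares = share(secret=1234, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 1234
```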
Laconic Conditional Disclosure of Secrets and Applications
By Nico Döttling, Sanjam Garg, Vipul Goyal, Giulio Malavolta; Presented by Giulio Malavolta
Notes
- Model 2-round protocol for conditional disclosure
- Applications
- Complexity-preserving "maliciousifier" (made up word)
- Implicit succinct argument
- Explicit is not succinct
- Delegating computation of a function F with rewards
- E.g. mining pools in cryptocurrencies
- Theorem: If CDH, LWE is hard . . . (see paper)
- Approach
- Probabilistically Checkable Proofs (PCP)
- Garbled
- Laconic OT
- Cryptography is interested in negligible soundness
- Laconic OT is only secure against a semi-honest receiver
Non-Malleable Commitments using Goldreich-Levin List Decoding
By Vipul Goyal, Silas Richelson; Presented by Silas Richelson
Notes
- Semantic Security [Goldwasser-Micali]
- Non-Malleability
- Security against an active adversary
- Adversary wins if he can distinguish Enc(0) and Enc(1) given m'
- Commitment Scheme (a toy hash-based commit/decommit sketch follows this list)
- Commit Phase (locked box)
- Decommit Phase (key)
- Non Malleable $Alice \to MIM \to Bob$
- Applications
- Secure protocol Comp, MPC, etc.
- MIM Types
- Synchronizing: left to right and down
- Sequential: down to left and right
- Extractable Commitment (ExtComm(m))
- Theorem: Any EC is non-malleable against sequential MIM
- Protocol is not extractable but suffices for non-malleability
- Choose k dynamically; $k = O(\log(n))$, even if A cheats
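For the locked-box/key intuition of a commitment scheme above, here is a minimal hash-based commit/decommit sketch. It treats the hash as a random oracle, is my own illustration rather than the paper's construction, and in particular is not non-malleable.

```python
import hashlib
import os

def commit(message: bytes):
    """Commit phase: the 'locked box' is a hash of the message plus fresh randomness."""
    opening = os.urandom(16)
    box = hashlib.sha256(opening + message).hexdigest()
    return box, opening

def decommit(box: str, opening: bytes, message: bytes) -> bool:
    """Decommit phase: reveal the 'key' (randomness + message) and recheck the hash."""
    return hashlib.sha256(opening + message).hexdigest() == box

box, opening = commit(b"bid: 42")
print(decommit(box, opening, b"bid: 42"))   # True
print(decommit(box, opening, b"bid: 43"))   # False: the commitment is binding
```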
Fast generalized DFTs for all finite groups
By Chris Umans; Presented by Chris Umans
Notes
- Discrete Fourier Transforms (DFT)
- Algorithms paper for performance
- Computable in $O(n \log n)$ time using the FFT (a radix-2 sketch follows this list)
- Group representation
- Finite group G
- Representation of G is a homomorphism
- Irreducible if no invariant subspace
- Finite set of irreducible representations
- Generalized DFT
- Fix group G with irreps $P_1, P_2, \ldots, P_k$ and scalars $a_0, a_1, \ldots, a_n$
- Applications
- FFT
- DSP
- Image Processing
- Communication
- etc.
- . . .
- FFT
- Double Subgroup Reduction
- “Looks like” matrix-valued DFT
- Triple Subgroup Reduction
- Fast Matrix Multiplication for G-DFT
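For reference, the abelian special case that the generalized DFT work builds on is the classical radix-2 Cooley-Tukey FFT. Here is a minimal sketch; the input length must be a power of two, and the sample vector is made up.

```python
import cmath

def fft(values):
    """Recursive radix-2 Cooley-Tukey FFT: O(n log n) for power-of-two lengths."""
    n = len(values)
    if n == 1:
        return list(values)
    even = fft(values[0::2])
    odd = fft(values[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

print([round(abs(x), 3) for x in fft([1, 1, 1, 1, 0, 0, 0, 0])])
```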
Waring Rank, Parameterized and Exact Algorithms
By Kevin Pratt; Presented by Kevin Pratt
Notes
- Outline
- Algorithmic importance
- Faster approximate counting of subgraphs
- Performant Waring Rank
- Waring Rank / Symmetric Tensor Rank
- Classic algebraic geometry
- d=2 matrix
- Use the Elementary Symmetric Polynomials
More barriers for rank methods, via a “numeric to symbolic” transfer
By Ankit Garg, Visu Makam, Rafael Oliveira, Avi Wigderson; Presented by Visu Makam
Notes
- Lower bounds for tensor rank $VP \neq VNP$ (via personal opinion)
- Most lower bounds were (in the past) from lifting techniques
- Potency of lifting lower bounds by small degree to large degree tensors
- Rank: Additive complexity
- S = simples = rank 1 tensors
- Lifting techniques (rank methods)
- E.g. $Ten(n, 9) \rightarrow Ten(n^3, 3)$
- For any linear map $\phi: Ten(n, 9) \rightarrow Ten(m, 3)$ we have $P(\phi) \geq n^3$?
- How do we go beyond barriers?
Towards a theory of non-commutative optimization
By Peter Bürgisser, Cole Franks, Ankit Garg, Rafael Oliveira, Michael Walter, Avi Wigderson; Presented by Cole Franks
Notes
- Groups $\rightarrow$ Vector Spaces
- Norm and Gradient
- Groups with Euclidean Geometry
- Examples: Operator Scaling, Tensor Scaling, . . .
- Applications
- Machine Learning
- Quantum Computation
- Perfect Matching
- etc.
- Non-commutative analogue: moment map
- Kempf-Ness / Hilbert-Mumford
- The commutative case: polynomial optimization
- [SV17] optimize in poly from oracle
- Geodesics: analogues of lines in non-Euclidean space (e.g. the hyperbolic plane)
- Algorithms
- Geodesic Convexity
- Geodesic gradient descent for scaling
- Trust region method expansion
- Open Problem: Geodesic Ellipsoid method
A Quantum Query Complexity Trichotomy for Regular Languages
By Scott Aaronson, Daniel Grier, Luke Schaeffer; Presented by Luke Schaeffer
Notes
- What is $Q_{AND}(n)$
- Theorem: Grover Search
- Theorem: BBBV
- Theorem (new): Fancy Grover Search
- Regular Languages
- Forms
- Empty Set $\emptyset$
- Set containing only the empty string $\{\varepsilon\}$
- Singletons $\{a\}$ for $a \in \Sigma$
- Examples
- Empty language
- $Q_{\emptyset} (n) = 0 = O(1)$
- Strings starting or ending with 0 $= O(1)$
- OR and AND $= \Theta(\sqrt{n})$
- Parity $= \Theta(n)$
- $MOD_k = \Theta(n)$
- Anything that is not “star free” is hard
- Redefine Query complexity
- Forms
- Star-free languages
- $\emptyset$, $\{\varepsilon\}$, and all $a \in \Sigma$, closed under union, concatenation, and complement
- Algorithm 1
- Grover
- Subroutine to test if item is marked
- Complexity: $O(n^{3/2})$ - too slow!
- Algorithm 2
- Grover Search
- Subroutine binary Grover search to test if item is marked
- Complexity: $\tilde{O}(n)$
- Algorithm 3
- Use algorithm # with random l
- Complexity: $\tilde{O}(\sqrt{n})$
- Applications
- First Order Logic
- Pathways in Grids
- Summation
- . . .
Exponential Separation between Quantum Communication and Logarithm of Approximate Rank
By Makrand Sinha, Ronald de Wolf; Presented by Makrand Sinha
Notes
- Communication Complexity
- How much communication is needed?
- Does log-rank give upper bound? See conjecture
- Results
- Simpler Proof of Classical
- Quantum Communication $n^{1/6}$
- $Sink(x, y) = 1 \leftrightarrow \exists \text{ sink}$
- Complete undirected graph on $\approx \sqrt{n}$ vertices, n edges
- V is sink iff $X_{N(v)} \oplus Y_{N(v)} = 1$
- Hint: if you are not familiar with quantum mathematics, replace with random algorithm - results should be about the same
Quantum advantage with noisy shallow circuits in 3D
By Sergey Bravyi, David Gosset, Robert König, Marco Tomamichel; Presented by Sergey Bravyi
Notes
- Constant Depth Quantum Circuits
- Constant-time quantum computation
- Quantum computers without error correction
- Topological Quantum Order
- Quantum Supremacy?
- Quantum $d=O(1)$ with classical $d \geq c \cdot \log(n)$
- Is quantum still superior if too noisy?
- No help from error correction - requires constant depth time
- Local Stochastic Noise $E \sim N(p)$
- Behave nicely with composition
- Clifford gates can be controlled by classical bit b=0,1
- Logical states of good Quantum codes are highly entangled
- Cannot be prepared in constant depth
- Combine stochastic noise with classical circuit for error correction
Stoquastic PCP vs. Randomness
By Dorit Aharonov, Alex B. Grilo; Presented by Alex B. Grilo
Notes
- NOTE: "Stoquastic" is a blend of stochastic and quantum; it refers to Hamiltonians whose off-diagonal terms avoid the sign problem, not a synonym for either term.
- Polynomial algorithms should be derandomizable
- MA vs NP
- In MA verification can be randomized
- MA is believed to equal NP
- Quantum PCPs are hard
- Hamiltonian Complexity
- If amplification procedure, then MA = NP
- MA Verification:
- Initial String
- Perform a random walk for poly(n) steps
- If bad string encountered, then reject
- “Classical” definition of problem can be used
- Without diving into quantum
- Using PCPs
Computationally-secure and composable remote state preparation
By Alexandru Gheorghiu, Thomas Vidick; Presented by Alexandru Gheorghiu
Notes
- Google officially lays claim to quantum supremacy!
- How do we verify that claim?
- How do we verify any claim to have a quantum computer?
- Verification
- Verifier + Prover
- Time + items queued
- Features
- Messages are (ideally) classical
- Verifier and prover efficiency
- Computation not revealed to the prover (blindness)
- Protocol is composable
- Existing Approach
- Prepare and Send
- Receive and Measure
- Entanglement Analysis
- New Approach
- Mahadev'18 Protocol: Learning with Errors (LwE)
- Hard for quantum computers. Classically commit to quantum state.
- Challenge then response for cryptography algorithm
- Prover can’t learn
- Cons:
- Not obviously composable or easy to modify
- Not blind
- Runs in linear time
- Improved Algorithm
- Extend commitment scheme (Remote State Preparation)
- Change in 3 places
- Quantum Random Access Code
- Encode 2 classical bits into 1 qubit
- Cannot achieve this perfectly!
- But has success probability of 85% (see the worked number after this list)
- If the prover is getting close to optimal probability, then it is assumed to be performing properly.
- Longer start up $O(G^4)$ but faster at runtime
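The 85% figure above is the standard optimal success probability of a 2-into-1 quantum random access code (a well-known bound, quoted here from memory rather than from the talk): $p_{\text{success}} = \cos^2(\pi/8) = \frac{1}{2} + \frac{1}{2\sqrt{2}} \approx 0.854$.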
Efficient Construction of Rigid Matrices Using an NP Oracle
By Josh Alman, Lijie Chen; Presented by Lijie Chen
Notes
- Matrix is rigid if it is far from low rank (in Hamming distance)
- $R_M(r) =$ minimum number of entries changed in M to make rank $\leq r$
- Goal: Construct without randomness
- Application:
- Communication Complexity
- Stronger new lower bound
- Proof Overview
- PCP
- Construct special circuit class
- Find lower bound on special circuit class
- Use [Williams14] approach to convert algorithm into lower bounds
- Use an efficient PCP verifier V with O(1) queries!
- Summary
- Non-trivial Razborov-rigid matrix
- Key Properties
- Faster #SAT algorithms
- Locally Decodable Representation
Faster Minimum k-cut of a Simple Graph
By Jason Li; Presented by Jason Li
Notes
- Improved to $O_k(n^{(1 + o(1))k})$ (a classical random-contraction baseline for $k = 2$ is sketched after this list)
- Only works on simple graphs
- Sparsify graph
- $\bar{n}:=n/\lambda_k$ vertices
- Preserves min k-cut
- Tree packing
- Color Coding
- T is a “spider” (everything “hangs” off of root)
- Future Directions
- More graphs (general weighted graph)?
- min k-cut in time
- Deterministic $n^k$ time?
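As a baseline for min cut, here is Karger's classical random-contraction algorithm for the $k = 2$ case (not the talk's algorithm, which handles general $k$ on simple graphs); the example graph and trial count are made up.

```python
import random

def karger_min_cut(edges, n_vertices, trials=200, seed=3):
    """Karger's contraction baseline: repeatedly contract edges in a random
    order until two super-vertices remain, then count the crossing edges."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        parent = list(range(n_vertices))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        components = n_vertices
        pool = edges[:]
        rng.shuffle(pool)
        for u, v in pool:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = cut if best is None else min(best, cut)
    return best

# Two triangles {0,1,2} and {3,4,5} joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(karger_min_cut(edges, n_vertices=6))  # the minimum cut value is 1
```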
Truly Optimal Euclidean Spanners
By Hung Le, Shay Solomon; Presented by Shay Solomon
Notes
- Graph is induced by set of points in the plane
- Params
- Size
- Sparsity(H) = $\frac{|E(H)|}{n - 1}$ (ideally O(1))
- Lightness(H) = $\frac{w(H)}{w(MST(S))}$
- Applications
- Approximate algorithms
- Network design
- Compact routing
- . . .
- Summary
- Tight
- Quadratic improvement for size and lightness
- Surprising use of Steiner points
- Greedy spanner is optimal in $\mathbb{R}^d$ (the classical greedy construction is sketched after these notes)
- Steiner Points (hard part - see paper)
- Upper-bound
- Warm up observation
- Recursive construction
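To close, here is the classical greedy spanner construction referenced above: scan point pairs by increasing distance and add an edge only when the current spanner distance exceeds $t$ times the true distance. The point set and stretch factor are made up; this is the textbook greedy algorithm, not the paper's Steiner-point construction.

```python
import heapq
import math

def greedy_spanner(points, t=1.5):
    """Classical greedy t-spanner: process pairs by increasing distance and add
    an edge only if the spanner built so far stretches that pair by more than t."""
    n = len(points)

    def dist(i, j):
        return math.dist(points[i], points[j])

    pairs = sorted((dist(i, j), i, j) for i in range(n) for j in range(i + 1, n))
    adj = {i: [] for i in range(n)}

    def spanner_distance(src, dst):
        # Dijkstra over the edges added so far.
        best = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > best.get(u, math.inf):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < best.get(v, math.inf):
                    best[v] = nd
                    heapq.heappush(heap, (nd, v))
        return math.inf

    edges = []
    for d, i, j in pairs:
        if spanner_distance(i, j) > t * d:
            adj[i].append((j, d))
            adj[j].append((i, d))
            edges.append((i, j))
    return edges

points = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
print(greedy_spanner(points, t=1.5))
```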