Mark Tygert's homepage

Representative research contributions

  1. Detecting differences while conditioning on (i.e., controlling for) covariates:

    Graphs of cumulative differences, together with associated variants of the Kolmogorov-Smirnov and Kuiper statistics, help gauge calibration or the deviation of a subpopulation from the full population without forcing any particular tradeoff between resolution and statistical confidence (unlike traditional reliability diagrams and calibration plots), as detailed in

    (presentation) Mark Tygert, "Conditioning on and controlling for variates via cumulative differences: measuring calibration, reliability, biases, and other treatment effects," slides: pdf + mov, recording: mp4,
    (paper) Mark Tygert, "Cumulative deviation of a subpopulation from the full population," Journal of Big Data, 8 (117): 1-60, 2021: pdf,
    (paper) Imanol Arrieta-Ibarra, Paman Gujral, Jonathan Tannen, Mark Tygert, and Cherie Xu, "Metrics of calibration for probabilistic predictions," Journal of Machine Learning Research, 23: 1-54, 2022: pdf,
    (paper) Mark Tygert, "Controlling for multiple covariates," arXiv, Technical Report 2112.00672, 2021: pdf,
    (paper) Mark Tygert, "A graphical method of cumulative differences between two subpopulations," Journal of Big Data, 8 (158): 1-29, 2021: pdf,
    (paper) Mark Tygert, "Calibration of P-values for calibration and for deviation of a subpopulation from the full population," Advances in Computational Mathematics, 49 (70): 1-22, 2023: pdf,
    (paper) Isabel Kloumann, Hannah Korevaar, Chris McConnell, Mark Tygert, and Jessica Zhao, "Cumulative differences between paired samples," arXiv, Technical Report 2305.11323, 2023: pdf.
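    As a rough illustration of the idea common to these papers, the following sketch computes the cumulative differences between binary outcomes and predicted probabilities, ordered by the predictions, along with Kolmogorov-Smirnov- and Kuiper-style statistics of the resulting curve. This is a simplification for illustration only: the function name is made up for this sketch, and the papers above handle ties, weights, general responses, and confidence bands far more carefully.

    ```python
    import numpy as np

    def cumulative_differences(scores, outcomes):
        """Cumulative averages of the differences between outcomes and
        predicted scores, ordered by score, plus Kolmogorov-Smirnov- and
        Kuiper-style statistics of the cumulative curve (a sketch)."""
        order = np.argsort(scores)
        s, y = scores[order], outcomes[order]
        n = len(s)
        # The curve to plot: cumulative sums of (y_k - s_k) / n.
        c = np.cumsum(y - s) / n
        # Maximal absolute deviation (Kolmogorov-Smirnov-style) and
        # range of the curve (Kuiper-style).
        ks = np.max(np.abs(c))
        kuiper = np.max(c) - np.min(c)
        return c, ks, kuiper

    # Usage: well-calibrated synthetic predictions yield a flat curve.
    rng = np.random.default_rng(0)
    p = rng.uniform(size=1000)
    y = (rng.uniform(size=1000) < p).astype(float)
    c, ks, kuiper = cumulative_differences(p, y)
    ```

    Plotting c against k/n gives the graphs the papers advocate; for calibrated predictions the curve stays near zero at every resolution, with no binning required.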

  2. Explanatory and normative studies of convolutional networks:

    Most of my work on designing and understanding convolutional networks amounts to tweaking or cleaning up existing methodologies. The main exception is detailed in

    Joan Bruna, Soumith Chintala, Yann LeCun, Serkan Piantino, Arthur Szlam, and Mark Tygert, "A mathematical motivation for complex-valued convolutional networks," Neural Computation, 28 (5): 815-825, 2016: pdf.

  3. FastMRI:

    We made some key contributions to clinically accepted acceleration of MRI, building on earlier deep-learning techniques from NYU and on the optical illusion that adding noise can sharpen an image -- see, for example, "Why does grain increase acutance?" Errors from our acceleration and diagnostically sound dithering are smaller than the machine errors already present in MRI. Siemens, Philips, GE, and AIRS currently market FDA-approved products based partly on our work, as summarized at

    our blog post, Siemens' pdf, Philips' site, GE's pdf, and AIRS' news (with significant innovations beyond ours).

  4. Randomized algorithms for linear algebra:

    Randomization recently revolutionized numerical methods for linear algebra. We have been contributing to many aspects of this movement, especially with regard to improvements important in practice. The key to realizing the widely touted benefits of randomization for the analysis of real, noisy data has turned out to be

    Vladimir Rokhlin, Arthur Szlam, and Mark Tygert, "A randomized algorithm for principal component analysis," SIAM Journal on Matrix Analysis and Applications, 31 (3): 1100-1124, 2009: pdf.
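    The flavor of the algorithm can be sketched as follows. This is an illustrative simplification, not the algorithm exactly as published: the function name, the oversampling of 10, and the default of 2 power iterations are assumptions for this sketch, and the centering that full PCA performs is omitted. The power iterations are what make the approach accurate on real, noisy data whose singular values decay slowly.

    ```python
    import numpy as np

    def randomized_pca(a, k, n_iter=2, oversample=10):
        """Rank-k approximate singular value decomposition of a via a
        random projection followed by a few power iterations (a sketch;
        centering of the data, as in full PCA, is omitted)."""
        m, n = a.shape
        g = np.random.default_rng(0).standard_normal((n, k + oversample))
        # Orthonormal basis for the range of a applied to random vectors.
        q = np.linalg.qr(a @ g)[0]
        # A few power iterations sharpen the captured subspace.
        for _ in range(n_iter):
            q = np.linalg.qr(a.T @ q)[0]
            q = np.linalg.qr(a @ q)[0]
        # Small SVD of the projection of a onto the captured subspace.
        u_small, s, vt = np.linalg.svd(q.T @ a, full_matrices=False)
        u = q @ u_small
        return u[:, :k], s[:k], vt[:k]

    # Usage: recover an exactly rank-5 matrix to machine precision.
    rng = np.random.default_rng(1)
    a = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
    u, s, vt = randomized_pca(a, 5)
    err = np.linalg.norm(a - (u * s) @ vt) / np.linalg.norm(a)
    ```

    The expensive operations are matrix-matrix products with a, so the scheme parallelizes well and needs only a few passes over the data.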

  5. Efficient algorithms for special-function transforms:

    Generalizing the fast Fourier transform to families of functions other than the sinusoidal Fourier modes ranges from convenient to critical for many applications, most notably in spectral methods for numerical computations on the sphere. Effective methods (building on others' innovations) for calculations on continuous domains emerged in

    Mark Tygert, "Fast algorithms for spherical harmonic expansions, II," Journal of Computational Physics, 227 (8): 4260-4279, 2008: pdf.

    Mark Tygert, "Fast algorithms for spherical harmonic expansions, III," Journal of Computational Physics, 229 (18): 6181-6192, 2010: pdf.

  6. Refined tests of statistical significance:

    Statistical significance testing is due for an overhaul, especially in light of the now widespread availability of modern computers. Our foremost stab at this is the still-evolving

    William Perkins, Mark Tygert, and Rachel Ward, "Computer-enabled metrics of statistical significance for discrete data," 1-157, 2014: pdf.
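    One theme of that work is replacing the classical chi-square statistic with the root-mean-square statistic for goodness of fit to a discrete model, with significance levels computed numerically. The following sketch conveys the idea via Monte Carlo simulation; the function name is made up for this illustration, and the paper itself computes such significance levels exactly rather than by simulation.

    ```python
    import numpy as np

    def rms_pvalue(counts, probs, trials=10000, seed=0):
        """Monte Carlo P-value for goodness of fit of observed counts to
        model probabilities, using the root-mean-square statistic rather
        than the classical chi-square statistic (a sketch)."""
        counts = np.asarray(counts, dtype=float)
        probs = np.asarray(probs, dtype=float)
        n = int(counts.sum())
        # Root-mean-square difference between observed and model frequencies.
        stat = np.sqrt(np.mean((counts / n - probs) ** 2))
        # Simulate the statistic under the model to estimate significance.
        rng = np.random.default_rng(seed)
        sims = rng.multinomial(n, probs, size=trials) / n
        sim_stats = np.sqrt(np.mean((sims - probs) ** 2, axis=1))
        return np.mean(sim_stats >= stat)

    # Usage: data matching the model versus data far from it.
    p_fit = rms_pvalue([25, 25, 25, 25], [0.25] * 4)
    p_off = rms_pvalue([70, 10, 10, 10], [0.25] * 4)
    ```

    Unlike the chi-square statistic, the root-mean-square statistic does not divide by the model probabilities, so rare categories cannot dominate the test; the cost is that its null distribution must be computed numerically, which is exactly what modern computers make routine.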