diff --git a/slides/best-practices.html b/slides/best-practices.html
index 17ec98797dd1ac9db9e0e740f4021b32923e8e94..92248dbfd7f9deb0b00ef3f735b2aff0317751ce 100644
--- a/slides/best-practices.html
+++ b/slides/best-practices.html
@@ -34,9 +34,9 @@ Building for Accessibility
 - Structure impacts navigation order
 - Need to announce things that change
 
+<!-- TODO: expand this slide deck? It's short and could cover more. Also discuss how this should impact the report -->
 
 ---
-[//]: # TODO expand this slide deck? IT's short and could cover more. Also discuss how this should impact the report
 
 # Why isn't the World Already Accessible?
 
@@ -51,11 +51,6 @@ Testing accessibility is also hard!
 
 **Organizations impact accessibility**
 
-Designs have a big role in what is accessible
-
-Programmers also of course very important
-
-These days, a lot of it is created by end users
  
 ---
 # Who Creates Accessibility?
@@ -64,9 +59,6 @@ Organizations impact accessibility
 
 **Designs have a big role in what is accessible**
 
-Programmers also of course very important
-
-These days, a lot of it is created by end users
 
 ---
 # How might UX Designers address Accessibility
@@ -84,29 +76,47 @@ Organizations impact accessibility
 
 Designs have a big role in what is accessible
 
-**Programmers also of course very important**
+**Developers also of course very important**
 - Need to understand the expectations of APIs and accessibility technologies
 - Need to understand screen readers
 
-These days, a lot of it is created by end users
 
 ---
-[//]: # TODO fill in
-# What do Programmers already know about access?
+# How do practitioners enact accessibility in practice?
 
 [Accessibility in Software Practice](https://dl.acm.org/doi/pdf/10.1145/3503508)
 
-???
-Summarize this more...
+- Data from 15 interviews and 365 survey respondents from 26 countries across five continents --> 120+ unique codes
+- Followed up with a survey --> 44 statements grouped into eight topics on accessibility, covering practitioners' viewpoints and different software development stages
+
 
 ---
-[//]: # TODO fill in
-# Organizational Issues
+# Organizational & People Challenges
 
-<!-- --- -->
-<!-- # How might Designers address Accessibility -->
+.quote[Before making any decisions about “Accessibility”: stakeholders (e.g., designers, architects, developers, testers, and clients) in a project should
+reach a consensus on accessibility development and design]
 
-<!-- <iframe src="https://embed.polleverywhere.com/free_text_polls/sL5v5Ufo0sHFBmmC15MPV?controls=none&short_poll=true" width="800px" height="600px"></iframe> -->
+| Challenge                              | Recommendation                                     |
+|----------------------------------------|----------------------------------------------------|
+| Lack of resources                      | Long-term organizational buy-in and budget         |
+| Culture                                | Cooperative Culture                                |
+| Size (too small)                       | Work with customers & teams to prioritize access   |
+| Inadequate expertise & education       | Include accessibility expertise among team members |
+| Lack of QA to go with developer effort | Include accessibility on testing team              |
+
+---
+# Process Challenges (technical)
+
+Note that the details of WCAG guidelines rank low on this list!
+
+| Challenge                                 | Recommendation                                       |
+|-------------------------------------------|------------------------------------------------------|
+| Unclear requirements & planning           | Include accessibility at all stages                  |
+| Unclear scope & architecture requirements | Engage with relevant end users                       |
+| Difficulty testing                        | Use appropriate testing suites & integration testing |
+| Lack of complete access practices         | Rigorous refactoring                                 |
+| Inappropriate tools                       | Well-designed documentation & training               |
+| Domain-dependent issues                   | Appropriate end-user engagement and testing          |
 
 ---
 # Who Creates Accessibility?
@@ -115,11 +125,10 @@ Organizations impact accessibility
 
 Designs have a big role in what is accessible
 
-Programmers also of course very important
+Developers also of course very important
 
 **These days, a lot of it is created by end users**
 - This means that you have to think about *indirect* impacts on content creation too (i.e. what do you expose to end users in authoring tools)?
-- Will talk more about this next week, but crowdsourcing & online social networks part of this too
 
 ---
 [//]: # (Outline Slide)
@@ -133,7 +142,6 @@ Building for Accessibility
 - Structure impacts navigation order
 - Need to announce things that change
 
-
 ---
 # (On-desktop) screen reader interaction
 Three core  interaction patterns:
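The "Building for Accessibility" bullets above (semantic markup, navigation order, announcing changes) could be grounded with a minimal HTML sketch on a future slide; the element contents here are hypothetical illustrations, not material from the decks:

```html
<!-- Semantic markup: alt text gives the image an accessible name -->
<img src="chart.png" alt="Bar chart of problems found per condition">

<!-- Structure drives navigation order: use headings and landmarks, not bare divs -->
<main>
  <h1>Results</h1>
  <!-- Announcing changes: an aria-live region is read aloud when its text updates -->
  <p role="status" aria-live="polite">3 problems found so far</p>
</main>
```

Screen readers announce updates to the `role="status"` region without moving keyboard focus, which is one standard way to handle the "announce things that change" bullet.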
diff --git a/slides/comparing-approaches.html b/slides/comparing-approaches.html
index a91558cc3be09d60ff48dc078af81027129a0ebd..784aa03544efcaf51ef57c59e2303428b4a61381 100644
--- a/slides/comparing-approaches.html
+++ b/slides/comparing-approaches.html
@@ -1,6 +1,6 @@
 ---
 layout: presentation
-title: FOOBAR  --Week N--
+title: Comparing Assessment Techniques
 description: Accessibility
 class: middle, center, inverse
 ---
@@ -9,7 +9,7 @@ background-image: url(img/people.png)
 .left-column50[
 # Welcome to the Future of Access Technologies
 
-Week N, FOOBAR
+Comparing Assessment Techniques
 
 {{site.classnum}}, {{site.quarter}}
 ]
@@ -61,8 +61,7 @@ How do you get a system to the point where user testing is worth doing?
 
 [Is your web page accessible? A comparative study...](https://dl.acm.org/doi/10.1145/1054972.1054979)
 
-Gather baseline problem data on 4 sites
-- Usability study
+Gather baseline problem data on 4 sites (Usability Study)
 
 Test same sites with other techniques
 - Expert review with guidelines
@@ -101,11 +100,11 @@ AT the time, WCAG 1; Meeting WCAG priority 1 guidelines did not address all seve
 # Results -- Grocery
 
 .left-column50[
-![:img Picture of the front page of the albertson's website for ordering groceries online,100%, width](img/assessment/grocery.png) 
+![:img Picture of the front page of the albertson's website for ordering groceries online,80%, width](img/assessment/grocery.png) 
 ]
 
 .right-column50[
-![:img Picture of the grocery cart for the the albertson's website,100%, width](img/assessment/grocery2.png) 
+![:img Picture of the grocery cart for the the albertson's website,83%, width](img/assessment/grocery2.png) 
 ]
 ---
 # Results -- Grocery
@@ -124,7 +123,7 @@ Easiest site
 ]
 
 .right-column50[
-![:img Picture of the grocery cart for the the albertson's website,100%, width](img/assessment/grocery2.png) 
+![:img Picture of the grocery cart for the the albertson's website,83%, width](img/assessment/grocery2.png) 
 ]
 ---
 # Results -- Find Names
@@ -139,7 +138,7 @@ Easiest site
 ]
 
 .right-column50[
-![:img Picture of the list of graduate students in Berkeley's HCI group GUIR at the time of the study,100%, width](img/assessment/findnames.png) 
+![:img Picture of the list of graduate students in Berkeley's HCI group GUIR at the time of the study,80%, width](img/assessment/findnames.png) 
 ]
 
 ---
@@ -174,7 +173,7 @@ Most difficult site
 ]
 
 .right-column50[
-![:img Picture of a simple fake class registration form we made for the study,100%, width](img/assessment/registration.png) 
+![:img Picture of a simple fake class registration form we made for the study,80%, width](img/assessment/registration.png) 
 ]
 
 ---
@@ -243,32 +242,41 @@ No correlation between developer severity and WCAG priority or empirical severit
 
 ---
 # H1: Methods Don't Differ 
-.left-column50[
-- Screen reader and Expert Review found more problems
-]
-.right-column50[
-![:img same graph highlighting that the average reviewer only found less than 20% of problems,100%, width](img/assessment/graph2.png) 
+.left-column[
+Manual Review found many problems
 ]
-???
+.right-column[
+<!-- <div class="mermaid"> -->
+<!-- pie title Problems Found by Condition -->
+<!--     "Dev. Review" : 8 -->
+<!--     "Guidelines Only" : 10 -->
+<!--     "Remote" : 9 -->
+<!-- </div> -->
 
-Note small differences between individual developers in finding problems
+![:img barchart showing that Dev. Review found 8 problems, Guidelines 10, and Remote 9,100%, width](img/assessment/IDCondition.png) 
 
-Difference between remote and screen reader group is significant
+]
 
 ---
 # H1: Methods Don't Differ 
-.left-column50[
-- Screen reader and Expert Review found more problems
-- Screen reader and Expert Review most valid
+.left-column[
+Manual Review as effective as remote screen reader users: *validity* = % of problems reported in each condition that matched known problems
 ]
-.right-column50[
-![:img same graph highlighting that the average validity is 60% for remote BLV users; 20% for expert reviewers; and 40% for non-BLV screen reader users,100%, width](img/assessment/validity.png) 
+
+.right-column[
+![:img barchart showing that Dev. Review 78% valid, Guidelines 94% valid, and Remote 82% valid,80%, width](img/assessment/Validity.png) 
 ]
-???
 
-Note small differences between individual developers in finding problems
+---
+# H1: Methods Don't Differ 
+.left-column[
+Manual Review as effective as remote screen reader users: *thoroughness* = % of known accessibility problems found in each condition
+]
+
+.right-column[
+![:img barchart showing thoroughness: Dev. Review found 33%, Guidelines 41%, and Remote 26% of known problems,80%, width](img/assessment/Thoroughness.png)
+]
 
-Difference between remote and screen reader group is significant
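The validity and thoroughness percentages on these slides follow directly from their definitions; a minimal sketch with hypothetical counts (the function names and numbers are illustrative, not from the study):

```python
def validity(matched_reports: int, total_reports: int) -> float:
    """Percent of problems reported in a condition that matched known problems."""
    return 100 * matched_reports / total_reports

def thoroughness(known_found: int, known_total: int) -> float:
    """Percent of all known problems that a condition found."""
    return 100 * known_found / known_total

# Hypothetical condition: 9 of 11 reported problems matched known ones,
# covering 9 of 22 known problems overall.
print(round(validity(9, 11)))      # -> 82
print(round(thoroughness(9, 22)))  # -> 41
```

A condition can be highly valid (few false positives) yet not thorough (many misses), which is why the slides report both measures separately.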
 
 ---
 # H2: Techniques find Different Problems
@@ -335,27 +343,27 @@ Many (perhaps all) of these are part of guidelines now
 - E8: Poor names
 - E9: Popups
 
----
-# H2: Techniques find Different Types of Problems
+<!-- --- -->
+<!-- # H2: Techniques find Different Types of Problems -->
 
-- High variance among individual reviewers
-- Screen reader novices did best at both major types of problems
+<!-- - High variance among individual reviewers -->
+<!-- - Screen reader novices did best at both major types of problems -->
 
-![:img Four bar charts each showing the cumulative benefit in terms of percentage of known problems found of adding evaluators. The first bar chart shows expert reviewers; who reach 30% of WCAG problems and 60% of empirical problems by the third evaluator. The second bar chart shows novice screen reader users who reach 60% of both empirical and WCAG problems by the fifth evaluator. The third bar chart shows remote BLV participants who reach 25% of WCAG and just under 20% of empirical problems by the fifth evaluator. The fourth bar chart shows an automated tool which finds about 25% of WCAG and 5% of empirical problems,100%, width](img/assessment/cumulative.png) 
+<!-- ![:img Four bar charts each showing the cumulative benefit in terms of percentage of known problems found of adding evaluators. The first bar chart shows expert reviewers; who reach 30% of WCAG problems and 60% of empirical problems by the third evaluator. The second bar chart shows novice screen reader users who reach 60% of both empirical and WCAG problems by the fifth evaluator. The third bar chart shows remote BLV participants who reach 25% of WCAG and just under 20% of empirical problems by the fifth evaluator. The fourth bar chart shows an automated tool which finds about 25% of WCAG and 5% of empirical problems,100%, width](img/assessment/cumulative.png)  -->
 
-???
-Explain chart
-also tracks heuristic eval literature: Five Evaluators find ~50% of Problems
-Individuals don't do well, but they  *differ*  from each other
+<!-- ??? -->
+<!-- Explain chart -->
+<!-- also tracks heuristic eval literature: Five Evaluators find ~50% of Problems -->
+<!-- Individuals don't do well, but they  *differ*  from each other -->
 
 ---
-# Discussion
+# Other findings
 
-Hyp 1: Screen reader most consistently effective
+<!-- Hyp 1: Screen reader most consistently effective -->
 
-Hyp 2: All but automated comparable
+<!-- Hyp 2: All but automated comparable -->
 
-- Screen missed only tables (w3); poor defaults (empirical)
+<!-- - Screen missed only tables (w3); poor defaults (empirical) -->
 
 Really need multiple evaluators
 
@@ -363,16 +371,16 @@ Remote technique needs improvement, could fare better
 
 Accessibility experience would probably change results
 
----
-# Discussion
+<!-- --- -->
+<!-- # Discussion -->
 
-Asymptotic testing needed
-- Can’t be sure we found all empirical problems
+<!-- Asymptotic testing needed -->
+<!-- - Can’t be sure we found all empirical problems -->
 
-Falsification testing needed
-- Are problems not in empirical data set really false positives?
+<!-- Falsification testing needed -->
+<!-- - Are problems not in empirical data set really false positives? -->
 
-More consistent problem reporting & comparison beneficial
+<!-- More consistent problem reporting & comparison beneficial -->
 
 Limitations
 - Web only
diff --git a/slides/img/assessment/IDCondition.pdf b/slides/img/assessment/IDCondition.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2d6543c1ab2a850fd57a3a79f1a15286c1af1caa
Binary files /dev/null and b/slides/img/assessment/IDCondition.pdf differ
diff --git a/slides/img/assessment/IDCondition.png b/slides/img/assessment/IDCondition.png
new file mode 100644
index 0000000000000000000000000000000000000000..353eae6870345ee39c89b31d3ec8e46729c5bb6d
Binary files /dev/null and b/slides/img/assessment/IDCondition.png differ
diff --git a/slides/img/assessment/Thoroughness.pdf b/slides/img/assessment/Thoroughness.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..eb6b4adbf6692fc951005cb091872ea7386474b5
Binary files /dev/null and b/slides/img/assessment/Thoroughness.pdf differ
diff --git a/slides/img/assessment/Thoroughness.png b/slides/img/assessment/Thoroughness.png
new file mode 100644
index 0000000000000000000000000000000000000000..fd8c859ee6bbc7a9b76ae25a19e9989a390b2948
Binary files /dev/null and b/slides/img/assessment/Thoroughness.png differ
diff --git a/slides/img/assessment/Validity.pdf b/slides/img/assessment/Validity.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..92593e6cdcfb3bf4a0f518cd17a269086d3303db
Binary files /dev/null and b/slides/img/assessment/Validity.pdf differ
diff --git a/slides/img/assessment/Validity.png b/slides/img/assessment/Validity.png
new file mode 100644
index 0000000000000000000000000000000000000000..48a56b455280ddb3012ead144ccad32619da9458
Binary files /dev/null and b/slides/img/assessment/Validity.png differ
diff --git a/slides/img/assessment/validity.png b/slides/img/assessment/validity.png
deleted file mode 100644
index 96a4890cee1c479eadb5000383218e9dedea4a62..0000000000000000000000000000000000000000
Binary files a/slides/img/assessment/validity.png and /dev/null differ