Free Download Designing with Data: Improving the User Experience with A/B Testing
Many people choose Designing With Data: Improving The User Experience With A/B Testing as a reference because it speaks to the needs of the present day. The book stands out in several ways: its word choice, its connected topics, and its accessible style make it an easy read. At the same time, it carries the professional weight that can influence you all the more easily.

Visiting the library every day may not be your style. You have plenty of work and other activities to do, yet you still need to find reading material, anything from literature to politics. So what will you do? One option is to pick up the book while you are out with friends at the bookstore, where you can browse and find whatever you like. But what if the book you want is not there? Will you wander around and search again somewhere else? Often, people are simply too lazy to do that.
Reading Designing With Data: Improving The User Experience With A/B Testing nowadays does not require you to buy it from an offline store. There is an excellent place to get the book: online. This website offers a large collection of books. Designing With Data: Improving The User Experience With A/B Testing is here, and all the other books you need will be here, too. Simply search for the name or title of the book, and you can find exactly what you are looking for.
To see how this book can make you better, you can start reading it right now. You may already know its authors; this is a very impressive book written by expert writers. So you need not have any doubts about Designing With Data: Improving The User Experience With A/B Testing. From the title and the authors' names on the cover, you can be sure it is worth reading. And although it is an accessible book, its content is substantial; it will not leave your head spinning after reading.
Now that this book is available, you can get it right away. It will not take much time, and it offers you an easy path. This well-regarded book from respected authors is exactly the kind of wanted, desired book that inspires. Designing With Data: Improving The User Experience With A/B Testing has joined the world of books effectively. Follow us now to get this impressive publication.
About the Author
Rochelle King is Global VP of Design and User Experience at Spotify, where she is responsible for the teams that oversee user research and craft the product experience at Spotify. Prior to Spotify, Rochelle was VP of User Experience and Product Services at Netflix, where she managed the Design, Enhanced Content, Content Marketing, and Localization teams. Collectively, these groups were responsible for the UI, layout, metadata (editorial and visual assets), and presentation of the Netflix service internationally across all platforms. Rochelle has over 14 years of experience working on consumer-facing products. You can find her on Twitter @rochelleking.

Dr. Elizabeth Churchill is a Director of User Experience at Google. Her work focuses on the connected ecosystems of the Social Web and the Internet of Things. For two decades, Elizabeth has been a research leader at well-known corporate R&D organizations, including Fuji Xerox's research lab in Silicon Valley (FXPAL), the Palo Alto Research Center (PARC), eBay Research Labs in San Jose, and Yahoo! in Santa Clara, California. Elizabeth has contributed groundbreaking research in a number of areas, publishing over 100 peer-reviewed articles, coediting 5 books in HCI-related fields, contributing as a regular columnist for the Association for Computing Machinery's (ACM) Interactions magazine since 2008, and publishing an academic textbook, Foundations for Designing User-Centered Systems. She has also launched successful products and has more than 50 patents granted or pending.

Caitlin Tan is a User Researcher at Spotify and a recent graduate of MIT.
Product details
Paperback: 370 pages
Publisher: O'Reilly Media; 1 edition (April 20, 2017)
Language: English
ISBN-10: 1449334830
ISBN-13: 978-1449334833
Product Dimensions: 6 x 1 x 9.2 inches
Shipping Weight: 1.2 pounds
Average Customer Review: 4.2 out of 5 stars (9 customer reviews)
Amazon Best Sellers Rank: #72,432 in Books
The strength of this book is that it's written for designers, a group that sometimes considers A/B testing as "competing" with the creative process. The authors point out the complementary value and call the "genius designer" a myth. The weakness of the book is that the statistics are wrong at times, which may mislead readers.

I have been using A/B tests and more sophisticated controlled experiments for over a decade, including leading the ExP Platform at Microsoft, which is used to run over 12,000 experiment treatments per year. Some of my work is referenced in this book, so please take this review in the appropriate context.

Here are some key points I loved:
• Great observations, such as "[Ensure] you're running meaningful learning tests rather than relying on A/B testing as a 'crutch', that is, where you stop thinking carefully and critically about your overarching goal(s) and run tests blindly, just because you can."
• Nice quotations from multiple people doing A/B testing in the industry.
• Good observations about insensitive metrics such as NPS, which take "significant change in experience and a long time to change what users think about a company." Another example, which is even more extreme, is stock price. You could run experiments and watch the stock ticker. Good luck with that insensitive metric.
• Good observation about metrics that "can't fail," such as clicks on a feature that didn't exist.
• Netflix found "a very strong correlation between viewing hours and retention... used viewing hours (or content consumption) as their strongest proxy metric for retention." Coming up with short-term metrics predictive of long-term success is one of the hardest things.
• "Deviating significantly from your existing experience requires more resources and effort than making small iterations."
• For those who "worry that A/B testing and using data in the design process might stifle creativity... generating a large variety of different hypotheses prior to designing forces you and your team to be more creative." Amen.
• Nice references to Dan McKinley's observations that most features are killed for lack of usage, and that unexciting features, such as "emails to people who gave up in the midst of a purchase had much bigger potential impact to the business."
• "...changing something about the algorithm that increases response speed (e.g., content download on mobile devices or in getting search results); users see the same thing but the experience is more responsive, and feels smoother. Although these performance variables aren't 'visible' to the user and may not be part of visual design, these variables strongly influence the user experience." Great point about the importance of performance and the fact that this cannot be measured in prototypes or sketches. We ran multiple "slowdown" experiments to measure the value of perf.
• Interesting discussion of "painted door" tests and the point that it's a questionable test that misleads users. It's also unable to measure a key metric, repeat usage: once you slam into the painted door, you know not to do it again.
• Nice concept of "Experiment 0," the experiment I might run before the one being planned.
• "An inconclusive result doesn't mean that you didn't learn anything.
You might have learned that the behavior you were targeting is in fact not as impactful as you were hoping for."
• An important point to remember: "When analyzing and interpreting your results, remember that A/B testing shows you behaviors but not why they occurred."
• "There is a difference between using data to make and inform decisions in one part of an organization versus having it be universally embraced by the entire organization."
• "One could believe that a designer or product person who doesn't know the right answer must not have enough experience. Actually it's almost inversely true. Because I have some experience, I know that we don't know the right answer until we test."
• "steer people away from using phrases like 'my idea is ...' and toward saying 'my hypothesis is...'"
• "one of the most important aspects of experimental work is triangulating with other sources and types of data."
• The book addresses ethics, a rarely discussed topic.

Here are some things I didn't like:
• The book is verbose. I read the electronic version, but the paperback is 370 pages, which gives a sense of the size.
• Very few surprising, eye-opening examples. Several of the papers on exp-platform, such as the Rules of Thumb paper, and the Sept-Oct 2017 HBR article on experimentation have surprising examples showing the humbling value of A/B testing. The A/B Testing book by Siroker and Koomen has great examples.
• The authors fall into a common pitfall of misinterpreting p-values (see the first sketch after this review). For example, they write:
o "a p-value helps quantify the probability of seeing differences observed in the data of your experiment simply by chance." But a p-value is a conditional probability, assuming the null (no difference).
o "p = 0.05 or less to be statistically significant. This means we have 95% confidence in our result." This is wrong. The p-value is conditioned on the null hypothesis being true.
o "A false positive is when you conclude that there is a difference between groups based on a test, when in fact there is no difference in the world.... This means that 5% of the time, we will have a false positive." Wrong again.
o "around 1 in 20 A/B experiments will result in a false positive, and therefore a false learning! Worse yet, with every new treatment you add, your error rate will increase by another 5%, so with 4 additional treatments, your error rate could be as high as 25%." Both halves are wrong: a p-value of 0.05 does not equate to a 5% false positive rate, and adding treatments does not linearly add 5%; the rate compounds as 1 - 0.95^4 ≈ 18.5%.
o "But getting a p-value below that twice due to chance has a probability of much less than 1%, about 1 in every 400 times." The 1/400 figure assumes you can multiply the two p-values; you need Fisher's combined probability test (meta-analysis) instead.
• "Sometimes, one metric is constrained by another. If you're trying to evaluate your hypotheses on the basis of app open rate and app download rate, for instance, app download rate is the upper bound for app open rate because you must download the app in order to open it. This means that app open rate will require a bigger sample to measure, and you'll need to have at least that big of a sample in your test." The idea that constrained metrics require larger samples is wrong as phrased. Triggering to smaller populations is highly beneficial in practice (see the second sketch after this review). For example, if you make a change to the checkout process, analyze only users who started checking out. While the sample size is smaller, the average treatment effect is larger. Including users who provably have a zero treatment effect is always bad.
• "Larger companies with many active users generally roll out an A/B test to 1% or less, because they can afford to keep the experimental group small while still collecting a large enough sample in a reasonable amount of time without underpowering their experiment." This is the 1% fallacy. Large companies want to be able to detect small differences. If Bing doesn't detect a 0.5% degradation to revenue in a US test, it might not realize the idea is going to lose $15M/year. The experiment must be sufficiently powered to detect small degradations in high-variance metrics like revenue that we care about. Most Bing experiments run at 10-20%, after an initial canary test at 0.5%.
• "It's always great to see the results you hoped to see!" The value of an A/B test is the delta between expected and actual results. Some of the best examples are ones where the results are MUCH BETTER than what was expected.
• "if you run too many experiments concurrently, you risk having users exposed to multiple variables at the same time, creating experimental confounds such that you will not be able to tell which one is having an impact on behavior, ultimately muddying your results." You can test for interactions. Bing, Booking.com, Facebook, and Google all run hundreds of concurrent experiments. This is a (mostly) solved problem.

Thanks, Ron Kohavi
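To make the two p-value corrections above concrete, here is a minimal Python sketch (an illustration, not code from the book; it assumes scipy is installed and that the tests are independent):

    # Numerical check of the two statistics corrections in the review above.
    from scipy.stats import combine_pvalues

    alpha = 0.05

    # 1) With k independent treatments each tested at alpha = 0.05, the chance
    #    of at least one false positive (when no treatment has any real effect)
    #    compounds as 1 - (1 - alpha)^k; it does not grow by 5% per treatment.
    for k in (1, 4, 5):
        fwer = 1 - (1 - alpha) ** k
        print(f"{k} treatment(s): chance of at least one false positive = {fwer:.1%}")
    # k=4 -> ~18.5% (not 20%); k=5 -> ~22.6% (not 25%).

    # 2) Two independent p-values of 0.05 cannot simply be multiplied to get
    #    0.05 * 0.05 = 1/400. Fisher's combined probability test is the
    #    standard way to combine them.
    stat, p_combined = combine_pvalues([0.05, 0.05], method="fisher")
    print(f"Fisher's combined p-value: {p_combined:.4f}")  # ~0.0175, about 1 in 57

Run as-is, the sketch prints the compounded error rates and a properly combined p-value of roughly 1 in 57, a far cry from 1 in 400.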
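The triggering point can be made concrete the same way. Under the usual rule of thumb for a two-sample test (about 80% power at a two-sided alpha of 0.05), the required sample per arm is roughly 16 * sigma^2 / delta^2. A small sketch with assumed, illustrative numbers (the checkout scenario, the 20% trigger rate, and the effect sizes are assumptions, not figures from the book):

    # Sketch: why analyzing only triggered users beats analyzing everyone.
    # Assumed scenario: a checkout change affects only the 20% of users who
    # start checkout; for everyone else the treatment effect is provably zero.
    sigma = 1.0             # assumed std dev of the metric (taken as comparable
                            # in the triggered and overall populations)
    delta_triggered = 0.10  # assumed effect among users who start checkout
    trigger_rate = 0.20     # assumed fraction of all users who start checkout

    n_triggered = 16 * sigma**2 / delta_triggered**2
    # Analyzing all users dilutes the effect by the trigger rate:
    delta_all = trigger_rate * delta_triggered
    n_all = 16 * sigma**2 / delta_all**2

    print(f"n per arm, triggered users only: {n_triggered:,.0f}")  # ~1,600
    print(f"n per arm, all users:            {n_all:,.0f}")        # ~40,000
    # The diluted analysis needs (1/trigger_rate)^2 = 25x more users per arm,
    # even though only 1 in 5 users ever reaches checkout.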
Feels very repetitive by the middle of the book. The quality of the content is lost when points aren't made succinctly.
Great book and content.
I cut my teeth on mail-order marketing, what they now call direct-response. Might even be called something else now. One of the cardinal rules of mail-order marketing was – and remains – test, test, test. You tested everything: the headline, the copy, the color of the paper, its weight, everything, until you had tested enough to determine the most efficient marketing package. This book's primary authors are eminently qualified and highly experienced. Even better, they are graceful writers. The authors define "A/B testing [as] a methodology to compare two or more versions of an experience to see which one performs the best relative to some objective measure". In other words, you test to find out what works best. The book is intended to acquaint designers and product managers launching digital products with using data to guide the product's refinement. In other words, the book shows designers and product managers how to use the wealth of data available to better market their product. Over the course of the first six chapters, they do precisely that. This stuff is really good. The authors, one with Spotify in her background, the other with Netflix, truly understand the concept, mechanics, and worth of testing. The last two chapters smelled too much like political correctness for my taste and, in my opinion, could have been left out without harming the value of the book. If you are not thoroughly experienced with the concept of A/B testing in marketing vehicles, you will benefit from this book.

Jerry
QUICK SUMMARY: Probably not a good book to start with if you're new to A/B testing.

BACKGROUND: I work in the IT industry as a project manager, so I don't work as closely now with the software development teams or QA teams as I used to. I got this book to learn more about A/B testing just for general knowledge on the topic, and it does seem written by authoritative authors. But after getting about halfway through this 300+ page book, I kind of lost the gusto to finish it. That's not a knock on the book's authors or their work; I'm just saying that if you want an introduction to A/B testing, this book does offer one, but I didn't find it written in an engaging style for a non-designer IT worker.

The other issue I had was with a few of the graphics. They are much too small to be readable in a book this size. I'm of the opinion that you either right-size the graphics so they render well on a printed page, or you just don't include them. Graphics that are too small to read easily aren't worth incorporating into a book. Not all the graphics were too small to read (most were okay, in fact), but the ones that were ought to have been redesigned, zoomed in, or otherwise dealt with. Granted, this is a petty complaint that doesn't deal with the content of the book.
Designing with Data: Improving the User Experience with A/B Testing is available in PDF, EPub, Doc, iBooks, RTF, Mobipocket, and Kindle formats.