Bambielli’s Blog · Part-time Teacher. Full-time Developer. Lifetime Learner. · Brian Ambielli (brian.ambielli@gmail.com) · https://bambielli.com/feed.xml · generated by Jekyll on 2024-01-18

<h1 id="denver-startup-week-ambassadors-program">Denver Startup Week Ambassadors Program</h1>
<p><em>2018-09-30 · https://bambielli.com/posts/denver-startup-week-ambassadors</em></p>

<p>Wow, what a week! I had the privilege of attending <a href="https://www.denverstartupweek.org/" target="_blank">Denver Startup Week (DSW)</a> as part of the second cohort of the <a href="https://www.denverstartupweek.org/initiatives/ambassadors" target="_blank">Ambassadors program</a>.</p>
<h2 id="what-is-the-ambassadors-program">What is the Ambassadors Program?</h2>
<p>The ambassadors program flies in founders, engineers, and business leaders from all over the country to tour Denver startups and get a feel for the city. Local startups sponsor the program so ambassadors don’t have to worry about costs for the week (which was amazing). Special shout out to Southwest Airlines for covering the flights.</p>
<figure>
<img src="/assets/images/startup_week/startup-swag.jpg" />
<figcaption>Welcome Swag Bag for the Ambassadors</figcaption>
</figure>
<h2 id="dsw-programming">DSW Programming</h2>
<p>DSW really went out of their way to make us feel like VIPs during our visit. Our group had <strong>intimate conversations with executives</strong> from <a href="https://evolvevacationrental.com/" target="_blank">Evolve Vacation Rentals</a>, <a href="https://www.guildeducation.com/" target="_blank">Guild Education</a>, <a href="https://gusto.com/" target="_blank">Gusto</a>, and <a href="https://ibotta.com/" target="_blank">Ibotta</a>. We toured their offices, chatted with employees, and learned about some of the ways that Denver has played a part in their startups’ journeys.</p>
<p>At each office visit, I got a strong sense of community between the startups and their home city of Denver. The tech scene is growing quickly, and our hosts were looking for ways to contribute back. Denver seems like a great place to grow and learn, and it is definitely on my radar of places to consider moving to in the medium term.</p>
<h2 id="a-network-of-ambassadors">A Network of Ambassadors</h2>
<p>The other startup ambassadors in the program were also inspiring: I learned as much speaking with them as I did from attending DSW programming throughout the week. <strong>I now have an extended network of 50 technology leaders across the country</strong>, drawn from a wide range of backgrounds and disciplines.</p>
<p>For example, I had a conversation with an interaction designer from Los Angeles who taught me about the <code class="language-plaintext highlighter-rouge">amplitude</code> product analytics platform and <code class="language-plaintext highlighter-rouge">segment.io</code> for analytics aggregation. I’m bringing these platforms back to Uptake as potential alternatives to Google Analytics, to make our analytics more meaningful.</p>
<p>I also had a long chat with a technology consultant from New York, who challenged my perspectives on consulting and gave me a rundown of the major players in the industry. Consulting doesn’t seem as stodgy as it once did to me, and it seems like a great way to learn best practices quickly.</p>
<p>These were just a few examples of the diverse perspectives that my co-ambassadors brought to Denver, and I walked away having grown as a technologist because of them.</p>
<h2 id="apply-for-2019">Apply for 2019!</h2>
<p>I feel privileged to have been given the opportunity to attend DSW and connect with outstanding members of startup, venture capital, design, consulting and business development communities across the country.</p>
<p>If you’d like to apply, feel free to reach out to me and I can connect you with the right folks. If you’ve never been to Denver, it’s absolutely worth the trip to check out this growing hub of innovation, and maybe work in a hike or two while you’re visiting.</p>

<h1 id="the-design-lifecycle">The Design Lifecycle</h1>
<p><em>2018-08-19 · https://bambielli.com/posts/design-lifecycle</em></p>

<p>A topic covered in my Human Computer Interaction course was the <code class="language-plaintext highlighter-rouge">design lifecycle</code>. This process helps you prioritize user needs, even when you don’t yet know what those needs are, while prototyping ideas for a new interface.</p>
<figure>
<img src="/assets/images/design-lifecycle.jpg" />
<figcaption>Figure 1: The Design Lifecycle - diagram courtesy of GA Tech HCI course</figcaption>
</figure>
<p>The design lifecycle has four steps which form a feedback loop. This loop helps you learn about your users, and narrow in on <code class="language-plaintext highlighter-rouge">design alternatives</code> that serve users best for the task at hand.</p>
<p>Starting at the top:</p>
<h2 id="needfinding">Needfinding</h2>
<p>Needfinding focuses on <strong>learning information about your users</strong>: who they are, why they need to accomplish the task for which you are designing, and how they currently accomplish it.</p>
<p>Needfinding often involves real user participation via in-person interviews or surveys, but can also be done by analyzing reviews of similar products (to understand gaps in other offerings) or through “naturalistic observation,” where you observe users in the context of the task.</p>
<p>Needfinding aims to fill a <code class="language-plaintext highlighter-rouge">data inventory</code> that describes the who, what, where, and why questions about your users and their tasks. Each time you perform needfinding, you should be adding additional information to each of these inventory items so you get a clearer picture of your users over time.</p>
<p>The items in your data inventory are:</p>
<ol>
<li>Who are the users
<ul>
<li>what are their ages, genders, technical ability, etc…</li>
</ul>
</li>
<li>Where are the users
<ul>
<li>where do they exist physically?</li>
</ul>
</li>
<li>What is the context of the task
<ul>
<li>what else is competing for their attention?</li>
</ul>
</li>
<li>What are their goals
<ul>
<li>what are they trying to accomplish?</li>
</ul>
</li>
<li>What do they need
<ul>
<li>what physical objects and information do they need?</li>
</ul>
</li>
<li>What are their tasks
<ul>
<li>what are they doing physically, cognitively, socially?</li>
</ul>
</li>
<li>What are their subtasks
<ul>
<li>how do they accomplish those subtasks?</li>
</ul>
</li>
</ol>
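<p>One lightweight way to keep a data inventory organized across needfinding rounds is a simple mapping from question to accumulated observations. This is an illustrative sketch (the field names are my own, not from the course):</p>

```python
# A minimal data inventory: each needfinding exercise appends new
# observations under the question it answers. Field names are illustrative.
data_inventory = {
    "who_are_the_users": [],    # ages, genders, technical ability...
    "where_are_the_users": [],  # where they exist physically
    "task_context": [],         # what else competes for their attention
    "goals": [],                # what they are trying to accomplish
    "needs": [],                # physical objects and information required
    "tasks": [],                # physical, cognitive, social activity
    "subtasks": [],             # how subtasks are accomplished
}

def record_observation(inventory, question, observation):
    """Add one finding from an interview, survey, or observation session."""
    inventory[question].append(observation)

record_observation(data_inventory, "who_are_the_users",
                   "Mostly ages 25-40, comfortable with smartphones")
```

<p>Each pass through the lifecycle adds entries, so the picture of the users sharpens over time rather than starting from scratch.</p>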
<h2 id="design-alternatives">Design Alternatives</h2>
<p>After we learn about our users via needfinding, we take that information and start to <strong>brainstorm potential solutions</strong> to satisfy the problems of these users.</p>
<p>The goal of this step is to generate <em>lots</em> of ideas. No idea is too crazy, as every idea helps us to explore more of the potential <code class="language-plaintext highlighter-rouge">design space</code> for the target task.</p>
<p>Brainstorming as a <code class="language-plaintext highlighter-rouge">group</code> is helpful, since different people will look at a problem from different perspectives, but it is often useful to perform a round of <code class="language-plaintext highlighter-rouge">individual brainstorming</code> first so you don’t fall prey to some of the pitfalls of group brainstorming (like social loafing, conformity, production blocking, and performance matching / biases).</p>
<p>I found it useful to use a digital tool like <a href="https://funretro.github.io/distributed/" target="_blank"><code class="language-plaintext highlighter-rouge">funretro.io</code></a> to keep track of ideas while brainstorming. This collaborative tool allowed multiple people to chime in with comments and new ideas, making for a more successful brainstorming session overall.</p>
<p>It is important to narrow your design alternatives down to the most viable ones by the time you exit brainstorming. Keep every idea that was generated, but choose a couple to focus on for the next stage of the design lifecycle.</p>
<h2 id="prototyping">Prototyping</h2>
<p>After deciding on a few design alternatives that came out of brainstorming, we move on to <code class="language-plaintext highlighter-rouge">prototyping</code> some of those ideas.</p>
<p>Prototypes can take many forms, on a range from <em>lo-fi</em> (paper prototypes, text prototypes) to <em>hi-fi</em> (interactive wireframes).</p>
<p>The fidelity of a prototype elicits different responses from the users who evaluate it:</p>
<ul>
<li>
<p>If an idea is still in its infancy, a lo-fi prototype is often more appropriate since it causes users to focus more on the mechanics of the task at hand instead of implementation details about the prototype.</p>
</li>
<li>
<p>If an idea is more fleshed out, and you’re looking for targeted feedback about the implementation of a design alternative, a hi-fi prototype is your best bet.</p>
</li>
</ul>
<p>Hi-fi prototypes are more expensive to build, so often it’s better to <strong>start with a lo-fi prototype</strong> to validate that the design alternative is sound before moving on to more targeted hi-fi feedback.</p>
<h2 id="evaluation">Evaluation</h2>
<p>After building a prototype for a design alternative, we move on to the evaluation phase where we <strong>get that prototype in the hands of some real users</strong> to get feedback and validate our ideas.</p>
<p>There are three types of evaluation that we can perform on our prototypes:</p>
<ol>
<li>
<p><strong>Qualitative Evaluation</strong> - This style of evaluation helps us understand what users like and dislike about a design alternative by letting them freely express their thoughts and feelings. By performing interviews, giving surveys, or conducting focus groups, we can understand what our users find easy, difficult, and confusing about a prototype. This type of evaluation is often conducted on lo-fi prototypes.</p>
</li>
<li>
<p><strong>Empirical Evaluation</strong> - This kind of evaluation generates quantitative data that help us validate, via statistical methods, whether our prototype is objectively “better” than alternatives. Quantitative studies are often more rigorously designed, and in return support stronger conclusions about the performance of our design alternative. Empirical evaluation is often performed on hi-fi prototypes, since it would be difficult to collect high-quality data on anything less.</p>
</li>
<li>
<p><strong>Predictive Evaluation</strong> - This evaluation method requires no additional user involvement. Instead, the evaluator (you) attempts to put themselves in the shoes of their users, and produce similar data to what a user might provide you. This type of evaluation is useful when we don’t have access to our target users, or if we want to iterate more quickly.</p>
</li>
</ol>
<h2 id="what-happens-next">What Happens Next?</h2>
<p>After conducting our evaluation, we decide if our data was conclusive enough to move our prototype through to full implementation. If we aren’t convinced, or if the data was inconclusive, we can iterate through the design lifecycle once again starting with needfinding.</p>
<p>All of the information we collected from the first round of the design lifecycle sticks with us, though, and informs us during the next iteration. For example, maybe we realized we need to know a bit more about the users’ context while they are performing the target task: we can target our needfinding exercise to elicit this type of information. This might prompt us to brainstorm design alternatives from a different angle, and come up with better prototypes that resonate more with our users.</p>
<p>It’s a virtuous cycle that brings us closer, with each loop, to an optimal interface that solves our users’ task.</p>

<h1 id="aws-summit-chicago-recap">AWS Summit Chicago Recap</h1>
<p><em>2018-08-06 · https://bambielli.com/posts/aws-summit-chicago</em></p>

<p>Last Thursday was the AWS Summit Chicago. I attended three sessions, covering AWS Fargate, canary deployments with Istio, and AWS SageMaker.</p>
<h2 id="session-1-aws-fargate">Session 1: AWS Fargate</h2>
<p><a href="https://aws.amazon.com/fargate/" target="_blank">Fargate</a> is a relatively new mode you can choose when deploying containers to ECS or EKS. It removes the need to configure server specifications (nodes, memory, CPU) by creating pre-packaged configurations for you that are optimized for most workloads. This allows you to focus purely on your application code instead of on the infrastructure your containers run on.</p>
<p>Contrast this with <code class="language-plaintext highlighter-rouge">EC2</code> mode, which requires that you specify server types and scaling options, and provision them in a way that doesn’t waste money. Many applications do not require this level of control.</p>
<p>For teams that are just starting out or are validating prototypes with users, <code class="language-plaintext highlighter-rouge">Fargate</code> mode seems to be a simple way to get your application deployed quickly in an efficient pre-configured way.</p>
<h2 id="session-2-canary-deployments-with-istio">Session 2: Canary Deployments with Istio</h2>
<p><a href="https://istio.io/" target="_blank">Istio</a> is a service mesh that is complementary to <code class="language-plaintext highlighter-rouge">kubernetes</code>. It provides additional routing control, security standardization, and telemetry beyond what Kubernetes provides out of the box.</p>
<p>A <code class="language-plaintext highlighter-rouge">canary deployment</code> is a way of verifying that a new version of your application will perform well in a production environment, by directing a small amount of production traffic at the new version and collecting metrics on how it is performing compared to the old version. If anything goes wrong, traffic can be directed back to old versions that are still deployed to prod and customers shouldn’t experience any outages.</p>
<p>Contrast this with <code class="language-plaintext highlighter-rouge">rolling deployments</code>, where service instances are slowly swapped over to the new version one by one until all old instances have been replaced. This gives you no opportunity to verify that the new instances are working as expected under production load; if any problems surface in the new version, all customers will be affected, since all old instances are spun down.</p>
<p>This session covered how one might perform canary deployments with vanilla kubernetes, and then additionally with istio:</p>
<p>With <code class="language-plaintext highlighter-rouge">kubernetes</code> alone, <strong>it is possible to achieve a percentage redirect of traffic to a new version by scaling the old and new application containers independently in your cluster</strong>. For example, if I wanted to redirect 20% of traffic to v2 and leave the remaining 80% directed to v1, I could scale my v1 containers to <code class="language-plaintext highlighter-rouge">8</code> and my v2 containers to <code class="language-plaintext highlighter-rouge">2</code>. This would effectively achieve an 80/20 split between the two versions.</p>
<p>Where this falls apart, though, is when you want to achieve something more fine grained (say 1% to a new version and 99% to another version). This would require you to spin up 99 containers of the old version, and one container of the new version in your cluster, which would likely result in a ton of idle compute time if your containers do not normally hover around this scale.</p>
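<p>The replica arithmetic above can be made concrete. Here is a minimal sketch (not tied to any particular Kubernetes tooling) that computes the smallest replica counts whose ratio approximates a desired traffic split, which shows exactly why a 1% canary gets expensive:</p>

```python
from fractions import Fraction

def minimal_replica_counts(new_version_percent):
    """Smallest (old, new) replica counts whose ratio sends the desired
    percentage of traffic to the new version, assuming round-robin routing."""
    frac = Fraction(new_version_percent, 100)  # reduce e.g. 20/100 -> 1/5
    total = frac.denominator
    new = frac.numerator
    return total - new, new

# A 20% canary needs only 5 replicas in total...
print(minimal_replica_counts(20))  # (4, 1)
# ...but a 1% canary forces 100 replicas, idle capacity and all.
print(minimal_replica_counts(1))   # (99, 1)
```

<p>Istio sidesteps this entirely by expressing the split as routing weights in the mesh configuration, decoupling traffic percentages from replica counts.</p>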
<p>With <code class="language-plaintext highlighter-rouge">Istio</code>, which has more telemetry and capability around routing in your cluster, it is possible to specify percentage splits of traffic in the service mesh configuration. This allows your cluster to scale container instances as needed, instead of creating potentially unnecessary copies just to achieve a certain probability of routing to it.</p>
<p>A final thought from this session: it is important when designing a canary deployment, that you are casting a wide enough net over your traffic to ensure that all of your user types are contained in the canary group. For example, if you perform a canary deployment at midnight CST, but most of your users are sleeping at that time, you might not get a representative sample of your traffic directed at your canary. This can lead to a green light to fully roll out the canary in prod, only to realize after the fact that a serious performance issue appeared for a critical user type.</p>
<p>Ensuring that all user types are contained in a canary deploy test seems similar to designing an A/B test for a UI. Similarly, if you are designing a canary deploy for a UI related change, you would likely need to get more sophisticated than just a pure percentage traffic redirect to ensure users receive a consistent experience and are not randomly swapped between the old and new versions. This type of sophistication might not be necessary for backend changes.</p>
<h2 id="session-3-aws-sagemaker">Session 3: AWS SageMaker</h2>
<p><a href="http://aws.amazon.com/sagemaker/" target="_blank">SageMaker</a> is an Amazon offering that allows you to build, train, and deploy machine learning models in the AWS cloud. SageMaker is framework agnostic, allowing you to build your models using any number of popular machine learning frameworks (TensorFlow, scikit-learn, R, etc…).</p>
<p>SageMaker offers a Jupyter notebook environment for developing models and scripts, hooked up to AWS cloud compute resources so you get quick feedback as you build.</p>
<p>It also offers custom implementations of common machine learning algorithms that are optimized to run in the Amazon cloud. Current offerings include K-means clustering, PCA, LDA, and more. These custom implementations maximize the performance-to-cost ratio of your model training, allowing your models to train faster for less money.</p>
<p>SageMaker seems like an interesting way to build high-performing machine learning models, as it abstracts away many of the engineering challenges of the build/test/deploy process, allowing you to focus on model analysis, where the bulk of the value for your customers lies.</p>

<h1 id="fifteen-principles-for-human-centered-design">15 Principles for Human Centered Design</h1>
<p><em>2018-08-06 · https://bambielli.com/posts/fifteen-principles-for-human-centered-design</em></p>

<p>Today marked the last day of the Human Computer Interaction course I took this summer through my GT master’s program. Here’s a look at some user interface design principles that we touched on during lecture.</p>
<h2 id="design-principles">Design Principles</h2>
<p>There are 15 different principles that HCI researchers use to evaluate an interface. These principles were developed by Don Norman, Jakob Nielsen, Larry Constantine and Lucy Lockwood.</p>
<figure>
<img src="/assets/images/design-principles.jpg" />
<figcaption>Figure 1: 15 Design Principles - diagram courtesy of GA Tech HCI course</figcaption>
</figure>
<p><code class="language-plaintext highlighter-rouge">Discoverability</code> - Relevant interface functions should be made visible, instead of requiring a user to read about them in documentation. There is a natural tension between discoverability and simplicity.</p>
<p><code class="language-plaintext highlighter-rouge">Simplicity</code> - The interface is easy to understand and use, irrespective of a user’s experience, knowledge, or level of concentration. The interface is not cluttered with unnecessary information that distracts from accomplishing the primary task.</p>
<p><code class="language-plaintext highlighter-rouge">Affordances</code> - Interfaces that “hint at” the way they are meant to be used. The interface’s perceived affordance might be at odds with its actual affordance (e.g. a door with a handle seems like it should be pulled, but the door actually needs to be pushed). You can add signifiers to the interface to help a perceived affordance match the actual affordance (e.g. a label next to the door handle that says “push”).</p>
<p><code class="language-plaintext highlighter-rouge">Mapping</code> - Used in HCI to describe the relationship between the interface and its real-world equivalents. Interfaces should speak the language of their users, favoring terms in the users’ vocabulary over system-oriented language. e.g. we use <code class="language-plaintext highlighter-rouge">cut</code>, <code class="language-plaintext highlighter-rouge">copy</code>, and <code class="language-plaintext highlighter-rouge">paste</code> instead of <code class="language-plaintext highlighter-rouge">duplicate</code>, since these map to actions that users already know.</p>
<p><code class="language-plaintext highlighter-rouge">Perceptibility</code> - The user’s ability to perceive the state of the system. Are they closer or farther away from accomplishing their goals? This is very important with digital systems, so users do not feel helpless when attempting to accomplish their tasks.</p>
<p><code class="language-plaintext highlighter-rouge">Consistency</code> - Design interfaces using familiar components which behave the same, so users do not need to re-learn your interface from scratch. Consistency is generally the best option, unless a design alternative provides a 10x improvement in usability.</p>
<p><code class="language-plaintext highlighter-rouge">Flexibility</code> - An interface should accommodate a wide range of users with varying levels of expertise. Allow users to use your interface in ways that fit with their standard workflows: e.g. some users are more comfortable copying and pasting using the right-click menu commands instead of keyboard shortcuts. Both accomplish the same task, but fit in to different user workflows.</p>
<p><code class="language-plaintext highlighter-rouge">Equity</code> - An interface is usable by users with diverse ranges of ability (accessibility).</p>
<p><code class="language-plaintext highlighter-rouge">Ease</code> - The design can be used with minimal amounts of fatigue.</p>
<p><code class="language-plaintext highlighter-rouge">Comfort</code> - Users of varying physical sizes, postures, mobility, can use the interface without strain.</p>
<p><code class="language-plaintext highlighter-rouge">Structure</code> - A user interface should be architected in a way that is organized and makes sense to the end user. e.g. information layout on a page is often made consistent with standards adopted from the newspaper industry.</p>
<p><code class="language-plaintext highlighter-rouge">Constraints</code> - Preventing a user from performing erroneously in the first place by constraining their possible behaviors. Password reset flows with client-side validations are a good example of this: they prevent the submit button from being made available until the user has successfully met password requirements.</p>
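<p>As a sketch of the password example, the client-side constraint logic might look like the following (the specific requirements here are hypothetical, for illustration only):</p>

```python
import re

# Hypothetical password requirements, for illustration only.
REQUIREMENTS = [
    (lambda pw: len(pw) >= 8,            "at least 8 characters"),
    (lambda pw: re.search(r"[A-Z]", pw), "an uppercase letter"),
    (lambda pw: re.search(r"\d", pw),    "a digit"),
]

def unmet_requirements(password):
    """Return the list of requirements the password does not yet satisfy."""
    return [msg for check, msg in REQUIREMENTS if not check(password)]

def submit_enabled(password):
    """The submit button only becomes available once every constraint is met,
    so the user is prevented from submitting an invalid password at all."""
    return not unmet_requirements(password)

print(submit_enabled("short"))      # False
print(submit_enabled("Secure123"))  # True
```

<p>The unmet-requirements list doubles as the feedback shown next to the field, which ties this principle to Feedback below.</p>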
<p><code class="language-plaintext highlighter-rouge">Tolerance</code> - The user interface should be designed such that errors that inevitably occur do not cause too much setback for the user in accomplishing their primary task. Supporting standard functions like <code class="language-plaintext highlighter-rouge">undo</code> and <code class="language-plaintext highlighter-rouge">redo</code> give users a sense of security when using an interface, and make them more likely to engage and explore.</p>
<p><code class="language-plaintext highlighter-rouge">Feedback</code> - Users should receive clear and direct feedback in response to errors generated by operating the interface. Oftentimes, vague or confusing feedback is worse than no feedback at all, since it can be distracting, misleading, and anxiety provoking for users.</p>
<p><code class="language-plaintext highlighter-rouge">Documentation</code> - Some documentation is likely inevitable. This documentation should be built around use cases for tasks that the user wants to accomplish with your system, instead of describing every possible system function out of context.</p>
<h2 id="thats-a-lot-of-principles">That’s a lot of principles!</h2>
<p>There sure are! But there are small and important lessons to be learned from each of them.</p>
<p>Some of the most important takeaways for me are:</p>
<p>1) Consistency trumps originality - users prefer systems that feel similar to what they already know</p>
<p>2) Perceptibility of system state is an often discounted but very important aspect of system design</p>
<p>3) Constrain the user from taking bad actions in the first place</p>
<p>4) Simplicity is often at odds with discoverability: a crowded interface is no good, but key actions and information should not be buried.</p>

<h1 id="comparison-of-four-randomized-optimization-methods">Comparison of Four Randomized Optimization Methods</h1>
<p><em>2018-07-22 · https://bambielli.com/posts/comparison-of-four-randomized-optimization-methods</em></p>

<p>This post compares the performance of 4 different randomized optimization (RO) methods on problems designed to highlight their strengths and weaknesses.</p>
<h2 id="randomized-optimization-methods">Randomized Optimization Methods</h2>
<p>The four RO methods explored were:</p>
<ul>
<li>
<p><a href="https://en.wikipedia.org/wiki/Hill_climbing" target="_blank"><code class="language-plaintext highlighter-rouge">Random Hill Climbing</code></a> - a standard hill climbing approach where optima are found by exploring a solution space and moving in the direction of increased fitness on each iteration.</p>
</li>
<li>
<p><a href="https://en.wikipedia.org/wiki/Simulated_annealing" target="_blank"><code class="language-plaintext highlighter-rouge">Simulated Annealing</code></a> - a variant on random hill climbing that focuses more on the exploration of a solution space, by randomly choosing sub-optimal next-steps with some probability. This increases the likelihood of finding global optima instead of getting stuck in local optima.</p>
</li>
<li>
<p><a href="https://en.wikipedia.org/wiki/Genetic_algorithm" target="_blank"><code class="language-plaintext highlighter-rouge">Genetic Algorithms</code></a> - a subset of evolutionary algorithms that produce new generations based on fitness of prior generations.</p>
</li>
<li>
<p><a href="https://www.cc.gatech.edu/~isbell/tutorials/mimic-tutorial.pdf" target="_blank"><code class="language-plaintext highlighter-rouge">MIMIC</code></a> - an RO approach created by Professor Isbell of Georgia Tech that attempts to exploit the underlying “structure” of a problem to avoid re-exploring sub-optimal portions of the solution space on future iterations.</p>
</li>
</ul>
<h2 id="problem-contexts">Problem Contexts</h2>
<p>The 3 problems I chose, which highlight the strengths and weaknesses of these algorithms, were:</p>
<ul>
<li>
<p><code class="language-plaintext highlighter-rouge">count ones</code> - a simple problem with a single global optimum and a large basin of attraction. SA and RHC should excel here, since their evaluation functions are inexpensive to compute and there are no local optima in which to get stuck.</p>
</li>
<li>
<p><a href="https://pdfs.semanticscholar.org/cd4f/e89d8dd6060e2957041f90fc699a30058d01.pdf" target="_blank"><code class="language-plaintext highlighter-rouge">four peaks</code></a> - a problem with two local optima whose wide basins of attraction are designed to trap simulated annealing and random hill climbing, plus two sharp global optima at the edges of the problem space. Genetic algorithms are more likely to find these global optima than the other methods.</p>
</li>
<li>
<p><a href="https://en.wikipedia.org/wiki/Knapsack_problem" target="_blank"><code class="language-plaintext highlighter-rouge">knapsack</code></a> - a classic NP-hard optimization problem with no known polynomial-time solution. The strength of MIMIC was highlighted in this context, as it exploited the underlying structure of the problem space learned from previous iterations.</p>
</li>
</ul>
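<p>To make the comparison concrete, here is a minimal hand-rolled sketch of simulated annealing applied to a count-ones-style bitstring fitness. This is my own simplification for illustration, not the library implementation used in the paper:</p>

```python
import math
import random

def count_ones(bits):
    """Fitness: the number of 1s in the bitstring (single global optimum)."""
    return sum(bits)

def simulated_annealing(n_bits=30, temperature=10.0, cooling=0.95,
                        iters=2000, seed=0):
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(iters):
        # Neighbor: flip one random bit.
        neighbor = current[:]
        i = rng.randrange(n_bits)
        neighbor[i] ^= 1
        delta = count_ones(neighbor) - count_ones(current)
        # Always accept improvements; accept downhill moves with
        # probability exp(delta / T), which shrinks as T cools. This is
        # the "exploration" that helps SA escape local optima.
        if delta > 0 or rng.random() < math.exp(delta / temperature):
            current = neighbor
        temperature = max(temperature * cooling, 1e-6)
    return current

best = simulated_annealing()
print(count_ones(best))  # at or very near the global optimum of 30
```

<p>On count ones the late, near-greedy phase does all the work, which is why RHC performs just as well there; the high-temperature phase only pays off on landscapes with deceptive local optima, like four peaks.</p>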
<p>The implementations of these algorithms and problem scenarios were pulled from the <a href="https://abagail.readthedocs.io/en/latest/index.html" target="_blank">ABAGAIL</a> library, which is maintained by Pushkar Kolhe of Georgia Tech.</p>
<h2 id="analysis">Analysis</h2>
<p><a href="/assets/pdf/Comparison-Of-Four-Randomized-Optimization-Methods.pdf" target="_blank">Click here</a> for the full paper with more detail and analysis, or view it below.</p>
<embed src="/assets/pdf/Comparison-Of-Four-Randomized-Optimization-Methods.pdf" />

<h1 id="tech-talk-http-caching">Tech Talk: HTTP Caching</h1>
<p><em>2018-05-13 · https://bambielli.com/posts/http-caching</em></p>

<p>This week I prepared a presentation for Uptake’s front end community of practice on HTTP Caching.</p>
<p>The <a href="https://developer.uptake.com">Developer Portal</a> team at Uptake recently overhauled how we performed HTTP caching of static assets for the site.</p>
<h2 id="etags">etags</h2>
<p>We began with an <code class="language-plaintext highlighter-rouge">etag</code>-based validation token strategy for static assets, which required that we validate the freshness of cached static files via HTTP requests to the server on each subsequent page refresh.</p>
<p><strong>This was a waste!</strong> Our static files rarely changed (particularly our vendor code and images) so performing these freshness checks just resulted in unnecessary chattiness with the webserver.</p>
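<p>For reference, the validation-token flow can be sketched in a few lines. This is a simplified model of what the browser and server do, not our actual server code:</p>

```python
import hashlib

def make_etag(body):
    """Servers commonly derive an etag from the response content."""
    return hashlib.md5(body).hexdigest()

def handle_request(body, if_none_match):
    """Return (status, payload): 304 with an empty body if the client's
    cached copy is still fresh, otherwise 200 with the full asset."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b""   # freshness confirmed; asset not re-sent...
    return 200, body      # ...but either way, a round trip happened

asset = b"console.log('hello');"
status, _ = handle_request(asset, None)               # first visit
status2, _ = handle_request(asset, make_etag(asset))  # every refresh after
print(status, status2)  # 200 304
```

<p>The 304 saves bandwidth, but that revalidation request on every refresh is exactly the chattiness we wanted to eliminate.</p>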
<h2 id="chunkhashing-and-cache-contol">chunkhashing and cache-control</h2>
<p>We ended up moving to a <code class="language-plaintext highlighter-rouge">webpack chunkhash</code> + <code class="language-plaintext highlighter-rouge">cache-control: max-age</code> strategy, which allowed us to cache our static assets in the browser <strong>indefinitely.</strong></p>
<p>A <a href="https://webpack.js.org/guides/caching/#output-filenames">chunkhash</a> acts in a similar way to an etag. It is generated from the content of your static file: in other words, <strong>a chunkhash will only change if the content of your file changes</strong>.</p>
<p>Adding a chunkhash to the names of your static assets allows you to cache indefinitely, as the cache will be busted with a new chunkhash the next time the contents change and a bundle with a brand new name is requested. Until the chunkhash changes, it’s ok for clients to continue using the cached version.</p>
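<p>The naming scheme, stripped to its essence, looks like this. The sketch below mimics what webpack does with <code class="language-plaintext highlighter-rouge">[chunkhash]</code>; it is not webpack itself:</p>

```python
import hashlib

def hashed_filename(name, content, length=8):
    """Name a bundle by a digest of its content, webpack-chunkhash style."""
    digest = hashlib.md5(content).hexdigest()[:length]
    stem, _, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}"

v1 = hashed_filename("vendor.js", b"lib code v1")
v1_again = hashed_filename("vendor.js", b"lib code v1")
v2 = hashed_filename("vendor.js", b"lib code v2")

print(v1 == v1_again)  # True: same content, same name -> cache hit forever
print(v1 == v2)        # False: new content, new name -> cache busted
```

<p>Because the HTML always references the current hashed name, clients either have the exact bytes cached already or fetch a file they have never seen, so no revalidation request is ever needed.</p>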
<p>We wound up saving an average of around <strong>100ms</strong> per page load, and about <strong>2 KB</strong> of data, over the etag strategy. Both strategies were very easy to configure. There really isn’t any excuse NOT to be caching static assets!</p>
<p>Find the slides that I presented below. They were heavily inspired from <a href="https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching">this blog post</a> on HTTP caching by Ilya Grigorik.</p>
<p><a href="/assets/pdf/Http-Caching.pdf">If you are on mobile, click here for a PDF of the presentation.</a></p>
<embed src="/assets/pdf/Http-Caching.pdf" />Brian Ambiellibrian.ambielli@gmail.comThis week I prepared a presentation for Uptake’s front end community of practice on HTTP Caching.Applying Supervised Learning to Addiction Medicine Data2018-02-11T10:07:00-06:002018-02-11T10:07:00-06:00https://bambielli.com/posts/supervised-learning<p>I spent the last few weekends applying 5 different supervised learning models to an anonymized and labeled set of data representing individuals of an addiction & family medicine clinic in the Chicago area.</p>
<p>See my full analysis here: <a href="/assets/pdf/assignment-1-ml.pdf" target="_blank">Applying Supervised Learning to Addiction Medicine Data</a></p>
<h2 id="background">Background</h2>
<p>I am working with a former professor of mine from Northwestern, who volunteers at a local Chicago clinic as a tech consultant. He provided me with some (anonymized) data representing demographic, background, and diagnosis information for clients of the clinic. <strong>He’s hoping I can help uncover factors in the data that indicate whether a new client of the clinic is likely to complete their prescribed programming</strong>, as the program’s graduation rate currently hovers around 20%.</p>
<p>This partnership was well timed, as I was looking for an interesting data set to work with for the first assignment of my <a href="https://www.omscs.gatech.edu/cs-7641-machine-learning" target="_blank">Georgia Tech Machine Learning</a> class anyway!</p>
<h2 id="methodology">Methodology</h2>
<p>Assignment 1 for the class involved training <code class="language-plaintext highlighter-rouge">Decision Trees</code>, <code class="language-plaintext highlighter-rouge">Neural Networks</code>, <code class="language-plaintext highlighter-rouge">K-Nearest-Neighbor</code> (KNN) instance-based classifiers, <code class="language-plaintext highlighter-rouge">Boosted Decision Tree</code> ensemble learners, and <code class="language-plaintext highlighter-rouge">Support Vector Machines</code> on two structurally different data sets. After training the 5 classifiers, we compared each model’s performance across the two data sets and against the other models.</p>
<p>The first data view was left relatively unprocessed from the raw data my professor provided, with a total of around 8000 sparse attributes after <code class="language-plaintext highlighter-rouge">One Hot Encoding</code> of categorical features. I derived a boolean column representing whether the instance completed their programming, which I used as the classification label, but otherwise Data View 1 was raw.</p>
<p>I applied a cleaning and dimension reduction procedure to create the second data view, which <strong>reduced the number of dimensions in the clinic data from 8000 down to 118</strong>.</p>
<p>The difference in dimensionality between these two data sets was purposeful, as certain supervised learning methods perform better as the dimension space shrinks (e.g. KNN and the <code class="language-plaintext highlighter-rouge">curse of dimensionality</code>), while others can perform better with more information (e.g. neural networks). I was hoping to highlight these differences in my analysis.</p>
<p>I calculated learning curves for each trained model to estimate how model accuracy changed with the number of training instances. I selected the best performing models from different parameter combinations using <code class="language-plaintext highlighter-rouge">k=5</code> fold cross-validation, to ensure that the best performer wasn’t just benefiting from a chance split of training and test sets.</p>
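<p>The fold-splitting at the heart of this procedure can be sketched as follows (a simplified illustration, not the library implementation I actually used):</p>

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class KFoldDemo {
    // Partition instance indices into k folds; each fold takes one turn
    // as the held-out test set while the rest form the training set.
    public static List<List<Integer>> folds(int numInstances, int k, long seed) {
        List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < numInstances; i++) indices.add(i);
        Collections.shuffle(indices, new Random(seed)); // randomize fold assignment

        List<List<Integer>> result = new ArrayList<>();
        for (int f = 0; f < k; f++) result.add(new ArrayList<>());
        for (int i = 0; i < numInstances; i++) {
            result.get(i % k).add(indices.get(i)); // deal indices round-robin
        }
        return result;
    }

    public static void main(String[] args) {
        List<List<Integer>> folds = folds(10, 5, 42L);
        System.out.println(folds.size());        // 5 folds
        System.out.println(folds.get(0).size()); // 2 instances held out per fold
    }
}
```

<p>Averaging accuracy across the k held-out folds guards against a single lucky (or unlucky) train/test split.</p>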
<p>Model performance was gauged by accuracy in classifying the test set: a pure measure of the combined number of correct “True” and “False” classifications in the test data.</p>
<h2 id="results">Results</h2>
<p>While the analysis was educational for me, I don’t think this first round will be very useful for my professor and his ultimate question of “what factors influence the completion of prescribed clinic programming?”</p>
<p>Except for decision trees, the models I trained are black boxes: they produce a trained classifier with some degree of accuracy, but don’t give the model developer any insight into how classification decisions are made.</p>
<p>Even if you were to peek into the hidden layers of a neural network, or extract the equation of the separating function learned by a Support Vector Machine, <strong>the values don’t provide any insight into which attributes are ultimately the most impactful when making a classification decision.</strong></p>
<p>Regardless of the accuracy (and the accuracy was over 97% in some cases!), giving my professor a classifier without insight into why it makes its decisions does not help his cause.</p>
<p>Furthermore, I purposefully chose two very different views of the clinic data, one raw and one highly derived, for the sake of comparing model performance on structurally different data sets. <strong>There is absolutely a happy medium between these two views</strong> that removes noise without losing the nuance in the data, and it would likely improve model performance over what I observed.</p>
<p>I have ideas on how to tune my cleaning procedure to find it, and now realize that <strong>it is often better to be conservative than aggressive when pruning data.</strong></p>
<p>I’m optimistic that I’ll be able to find something interesting to help my professor’s cause, but this first round was mostly for my own education and for getting acquainted with the data itself. <strong>There are 3 more assignments for class this quarter</strong>, which will require me to look at the same data with different methods (some more supervised, but also unsupervised).</p>
<p>I also plan on running some more “white-box” supervised learning algorithms on the data, like <code class="language-plaintext highlighter-rouge">linear regression</code> and even decision trees again. I’m hoping this gives me a better sense of the most influential factors in making the “graduate” vs “not graduate” decision.</p>
<p>I also think something like <a href="https://en.wikipedia.org/wiki/F1_score" target="_blank">F1 score</a> would be a better choice than pure accuracy for assessing model performance, since “False” instances dominate my data set (over 80% False).</p>
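<p>A quick worked example shows why accuracy misleads on imbalanced data. Suppose a test set of 100 instances, 85 “False” and 15 “True”, and a degenerate classifier that always predicts “False” (hypothetical counts, for illustration only):</p>

```java
public class F1Demo {
    public static void main(String[] args) {
        // Confusion-matrix counts for the always-"False" classifier:
        int tp = 0, fp = 0, fn = 15, tn = 85;

        double accuracy = (tp + tn) / (double) (tp + tn + fp + fn);
        // Precision and recall both collapse to 0 when "True" is never predicted.
        double precision = (tp + fp) == 0 ? 0 : tp / (double) (tp + fp);
        double recall = (tp + fn) == 0 ? 0 : tp / (double) (tp + fn);
        double f1 = (precision + recall) == 0 ? 0
                : 2 * precision * recall / (precision + recall);

        System.out.println(accuracy); // 0.85 -- looks respectable
        System.out.println(f1);       // 0.0  -- exposes a useless classifier
    }
}
```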
<p>If you’re interested in <strong>seeing the code</strong>, or have any questions about my experience in the Georgia Tech masters program thus far, feel free to reach out!</p>
<embed src="/assets/pdf/assignment-1-ml.pdf" />Brian Ambiellibrian.ambielli@gmail.comI spent the last few weekends applying 5 different supervised learning models to an anonymized and labeled set of data representing individuals of an addiction & family medicine clinic in the Chicago area.Holiday Hacks: Recipe Scaler Chrome Extension2018-01-07T05:17:00-06:002018-01-07T05:17:00-06:00https://bambielli.com/posts/holiday-hacks-recipe-scaler<p>Over the holidays, I created a <a href="https://www.github.com/bambielli/recipe-scaler/" target="_blank">chrome extension</a> that allows a user to dynamically adjust the number of servings for a recipe they are preparing.</p>
<p>The extension works for recipes on <a href="https://www.halfbakedharvest.com/" target="_blank">halfbakedharvest.com</a></p>
<p>The extension will scale ingredient amounts in the browser, based on the selected number of servings.</p>
<p>Scaling recipes has always been annoying to me: if I try to do it by hand or in my head, I either scale by the wrong factor or forget to scale an ingredient entirely while checking back with the recipe.</p>
<p>The recipe-scaler removes this point of friction, so you can focus on cooking instead of on (surprisingly tricky) math.</p>
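<p>The core arithmetic is just a ratio; here is a sketch of the kind of scaling the extension performs (illustrative code, not the extension’s actual implementation):</p>

```java
public class RecipeScaler {
    // Scale an ingredient amount from the recipe's base servings to a target.
    public static double scale(double amount, int baseServings, int targetServings) {
        return amount * targetServings / baseServings;
    }

    public static void main(String[] args) {
        // 1.5 cups of flour in a 4-serving recipe, scaled to 6 servings:
        System.out.println(scale(1.5, 4, 6)); // 2.25
    }
}
```

<p>The tricky part in practice is applying this consistently to every ingredient, which is exactly the step that is easy to fumble by hand.</p>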
<p>You can check out a <strong>live demo of the scaler in action here</strong>: <a href="http://www.bambielli.com/recipe-scaler/" target="_blank">www.bambielli.com/recipe-scaler/</a></p>
<p>And see the <strong>source code</strong> here: <a href="https://www.github.com/bambielli/recipe-scaler/" target="_blank">www.github.com/bambielli/recipe-scaler/</a></p>Brian Ambiellibrian.ambielli@gmail.comOver the holidays, I created a chrome extension that allows a user to dynamically adjust the number of servings for a recipe they are preparing.Writing Useful Tests for React Applications2017-10-02T16:59:00-05:002017-10-02T16:59:00-05:00https://bambielli.com/posts/useful-tests<p>My team at Expedia recently put some thought into “what makes a useful test”. Read on to hear our thoughts.</p>
<h2 id="the-golden-rule">The Golden Rule</h2>
<p>Our team arrived at a golden rule for testing: <strong>write tests that are useful.</strong></p>
<p>This seems like a pretty simple realization, but it has profoundly impacted how we write tests.</p>
<p>Previously we strove to meet quantifiable testing metrics like 80% test coverage or 90% branch coverage, but we found these metrics did more harm than good. <strong>We realized we were spending more time writing and maintaining our test suite than writing the application code itself.</strong></p>
<p>Something needed to change.</p>
<h2 id="what-makes-a-useful-test">What Makes a Useful Test</h2>
<p>We spent time thinking about what made a useful test. The 3 axioms we came up with are as follows:</p>
<ol>
<li>Tests that catch bugs before they reach prod are useful.</li>
<li>Tests that point developers to the source of a bug are useful.</li>
<li>Tests that are easy to write and maintain are useful.</li>
</ol>
<p>We are now constantly questioning our test suite as we make changes to application code. If we think a particular test isn’t useful anymore, we delete it. This ensures our test suite is always lean.</p>
<h2 id="presentation">Presentation</h2>
<p>Find the slides from our presentation below.</p>
<p><a href="/assets/pdf/useful-tests-for-react-applications.pdf">If you are on mobile, click here for a PDF of the presentation.</a></p>
<embed src="/assets/pdf/useful-tests-for-react-applications.pdf" />Brian Ambiellibrian.ambielli@gmail.comMy team at Expedia recently put some thought into “what makes a useful test”. Read on to hear our thoughts.Design Patterns: Adapter and Facade2017-09-09T06:35:00-05:002017-09-09T06:35:00-05:00https://bambielli.com/posts/the-adapter-pattern<p>Last week, as part of Expedia Learniversity, I gave a presentation on the Adapter and Facade design patterns.</p>
<h2 id="adapter-interface-with-anything">Adapter: Interface with Anything</h2>
<p>The Adapter pattern is useful when you are trying to integrate components of your system that have incompatible interfaces. An adapter converts a target interface (the interface a client expects) into the adaptee’s interface.</p>
<p>The canonical example of an adapter is the <strong>US plug to Euro socket adapter.</strong> This adapter accepts a US plug (the client) and implements a US socket interface for it to integrate with (the target interface). The adapter exposes a Euro plug on the other side, which can be plugged into any Euro socket (the adaptee).</p>
<p>Internally, the adapter performs the necessary conversions between what a device with a US plug expects from its power source and what a Euro socket provides. <strong>This additional conversion work is invisible to the client</strong>, which sees nothing more than a US socket interface it can integrate with.</p>
<p>Speaking in software terms, an adapter class <strong>implements the target interface that the client expects</strong>, and <strong>composes an instance of the adaptee object</strong> which it uses internally to perform conversion from target to adaptee.</p>
<p>See the following implementation of an adapter that implements the Java Iterator interface as its target and composes a Java Enumeration object as its adaptee.</p>
<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">java.util.Enumeration</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">java.util.Iterator</span><span class="o">;</span>
<span class="kd">public</span> <span class="kd">class</span> <span class="nc">IteratorToEnumerationAdapter</span> <span class="kd">implements</span> <span class="nc">Iterator</span> <span class="o">{</span>
<span class="nc">Enumeration</span> <span class="n">enumeration</span><span class="o">;</span>
<span class="nc">IteratorToEnumerationAdapter</span><span class="o">(</span><span class="nc">Enumeration</span> <span class="n">enumeration</span><span class="o">)</span> <span class="o">{</span>
<span class="k">this</span><span class="o">.</span><span class="na">enumeration</span> <span class="o">=</span> <span class="n">enumeration</span><span class="o">;</span>
<span class="o">}</span>
<span class="nd">@Override</span>
<span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">hasNext</span><span class="o">()</span> <span class="o">{</span>
<span class="k">return</span> <span class="n">enumeration</span><span class="o">.</span><span class="na">hasMoreElements</span><span class="o">();</span>
<span class="o">}</span>
<span class="nd">@Override</span>
<span class="kd">public</span> <span class="nc">Object</span> <span class="nf">next</span><span class="o">()</span> <span class="o">{</span>
<span class="k">return</span> <span class="n">enumeration</span><span class="o">.</span><span class="na">nextElement</span><span class="o">();</span>
<span class="o">}</span>
<span class="nd">@Override</span>
<span class="kd">public</span> <span class="kt">void</span> <span class="nf">remove</span><span class="o">()</span> <span class="o">{</span>
<span class="k">throw</span> <span class="k">new</span> <span class="nf">UnsupportedOperationException</span><span class="o">(</span><span class="s">"Remove doesn't exist"</span><span class="o">);</span>
<span class="o">}</span>
<span class="o">}</span></code></pre></figure>
<p>By wrapping an instance of Enumeration, our client code can use this adapter to interface with Enumerations as if they were Iterators! Since the adapter implements Iterator, it is also of Iterator type! This means the adapter can be used in place of Iterator anywhere in our client code where Iterator is expected.</p>
<p>Notice that the Iterator interface exposes a third method, <code class="language-plaintext highlighter-rouge">remove()</code>, that Enumeration objects do not support. One option in this scenario would be to implement the missing behavior in our adapter class. That somewhat deviates from the definition of Adapter, though, since <strong>a pure adapter is just supposed to convert from one interface to another without adding any additional behavior.</strong></p>
<p>In the example above, we choose to throw an <code class="language-plaintext highlighter-rouge">UnsupportedOperationException</code>, which indicates to our client that the Iterator they are interfacing with does not support <code class="language-plaintext highlighter-rouge">remove()</code>.</p>
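<p>Client code can then treat any Enumeration as an Iterator. Here is a minimal usage sketch (the adapter class from the article is repeated so the example compiles standalone):</p>

```java
import java.util.Enumeration;
import java.util.Iterator;
import java.util.Vector;

public class AdapterDemo {
    // The adapter from above, nested so this file is self-contained.
    public static class IteratorToEnumerationAdapter implements Iterator<Object> {
        private final Enumeration<?> enumeration;

        public IteratorToEnumerationAdapter(Enumeration<?> enumeration) {
            this.enumeration = enumeration;
        }

        public boolean hasNext() { return enumeration.hasMoreElements(); }
        public Object next() { return enumeration.nextElement(); }
        public void remove() { throw new UnsupportedOperationException("Remove doesn't exist"); }
    }

    public static void main(String[] args) {
        Vector<String> legacy = new Vector<>();
        legacy.add("a");
        legacy.add("b");

        // Vector.elements() returns an Enumeration; the adapter lets us
        // consume it anywhere an Iterator is expected.
        Iterator<Object> it = new IteratorToEnumerationAdapter(legacy.elements());
        while (it.hasNext()) {
            System.out.println(it.next()); // prints "a" then "b"
        }
    }
}
```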
<h2 id="facade-simplify-your-subsystem-interfaces">Facade: Simplify your Subsystem Interfaces</h2>
<p>Consider a typical home theater system: it contains many different components like Screens, Lights, Receivers, Projectors, and maybe even popcorn poppers.</p>
<p>When a client comes along and wants to watch a movie, <strong>they need to know both the order and the operations to perform on each component of the subsystem to achieve their goal</strong>. For many, this is a daunting task, and it gets even worse when the home theater nerd upgrades components that expose different methods and <strong>break other clients’ understanding of how the system works.</strong></p>
<p>Facade to the rescue: a Facade is a simplified interface that sits on top of a subsystem and exposes the methods clients need to achieve their goals. In the case of the home theater, a universal remote acts as a facade: movie-watching clients use the simplified interface the remote provides, push one button, and all of the necessary components turn on in the correct order.</p>
<p>The facade <strong>does not encapsulate subsystem components away from clients</strong>. The nerds can still come in and have full control over the different components of the home theater system, if they so choose.</p>
<p>An advantage of facade is that <strong>it decouples clients from system components</strong>. This makes upgrades easier to perform: when a component changes underneath the facade, only the facade needs to be updated, which ensures that clients get the same experience regardless of the underlying component architecture.</p>
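<p>A stripped-down home theater facade might look like this (hypothetical component classes, invented for illustration):</p>

```java
// Hypothetical subsystem components, each with its own interface.
class Projector { void on() { System.out.println("projector on"); } }
class Lights { void dim(int pct) { System.out.println("lights at " + pct + "%"); } }
class Screen { void lower() { System.out.println("screen down"); } }

// The facade exposes one simple method that hides the order and the
// operations the subsystem requires. Components remain fully accessible
// to power users who want fine-grained control.
class HomeTheaterFacade {
    private final Lights lights = new Lights();
    private final Screen screen = new Screen();
    private final Projector projector = new Projector();

    void watchMovie() {
        lights.dim(10);
        screen.lower();
        projector.on();
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        new HomeTheaterFacade().watchMovie(); // one "button press" for the client
    }
}
```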
<h2 id="presentation">Presentation</h2>
<p>I gave the following presentation on Adapter and Facade at our weekly Learniversity session at work. <a href="/assets/pdf/Adapter-Presentation-09-09-17.pdf">If you are on mobile, click here for a PDF of the presentation.</a></p>
<embed src="/assets/pdf/Adapter-Presentation-09-09-17.pdf" />Brian Ambiellibrian.ambielli@gmail.comLast week, as part of Expedia Learniversity, I gave a presentation on the Adapter and Facade design patterns.