Cocone Xenon: Beyond3 Pre-Launch Strategy
Project Summary: Led creative direction and UX strategy for the social app’s launch. Designed the onboarding flows, user experience, and research strategy.
Onboarding screen mockups
Pre-Launch Metrics
- Concept Validation
- Ideation from 0-1
- MVP Build
- User-flow Charts
Product Deck
Available upon request
MI: Mycelium Intelligence “Exploring AI and Nature's Network.”
June 2024 - April 2025
This installation showcases the parallels between mycelium networks and artificial intelligence, emphasizing connectivity, adaptability, and collaboration. Surrounded by the ambient sounds of these earthy-growing connections, participants interact with generative visuals projected on sinewave panels and bricks made from myco-foam, a material grown from mycelium. These panels serve as both visual displays and audio insulators, while the bricks demonstrate eco-sustainable building possibilities.
SAGE
JAN - MAY 2021
Sage is an app that helps home cooks efficiently find ingredients in their city, along with essential information and usage tips.
Case Study
ROLE Designer, Researcher, Ideation, Presenter, Results analysis
TOOLS Figma, UsabilityHub
PROCESSES
Design Thinking
Product Design
UX/UI Design
User Research
Personas
A/B testing
ADA
Pitch deck
Scroll down for the full case study
Case Study
Objective
Ingredient Finding Mobile App
Outcome
Clickable Prototype
Process
Interviews
Personas

Design
MVP Features Matrix

Userflow Chart

Early sketches | Wireframes


A/B Testing
Usability Hub Tests
I tested a series of our wireframes with UsabilityHub and asked users to:
- navigate the wireframes
- complete the task of looking up the availability of the ingredient “ajwain seeds” in the area
Version 1.0.0



Version 1.0.1



Testing results
- 80% of our user pool falls into the 20–40 age range. This reflects our personas and target user age group, but we still included a few users from an older age group to capture other perspectives.
- The results offered a few interesting discoveries. The majority of users were able to navigate through the frames until the ingredient page, where they had to click the drop-pin icon to find the ingredient on the map. Only 57% of users on that page successfully completed this step, which suggests the icon alone does not communicate its function clearly and that a text label, or a separate labeled button, may be needed.
- This also brought to our attention that a shortcut button to the map search should perhaps be included on the home page, potentially in place of the “drop a pin” button. We also realized that the burger menu on the home page could cause confusion because its function is unclear; it currently serves as a catch-all button. In our next iteration, we could consider replacing the burger menu with a user profile button. More iterations and tests are needed to fully determine the changes in this area.
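Step-by-step completion rates like the 57% above come from simple funnel arithmetic. A minimal sketch, using hypothetical counts (the real numbers live in the UsabilityHub results):

```python
# Compute per-step completion rates for a usability-test funnel.
# Step names and counts below are illustrative placeholders,
# not the actual UsabilityHub data.

def completion_rates(funnel):
    """Return {step: fraction of users from the previous step who completed it}."""
    rates = {}
    prev = None
    for step, count in funnel:
        if prev is not None and prev > 0:
            rates[step] = count / prev
        prev = count
    return rates

funnel = [
    ("home", 100),
    ("ingredient page", 95),
    ("map via drop-pin icon", 54),  # ~57% of the 95 who reached the page
]

rates = completion_rates(funnel)
print({step: round(rate, 2) for step, rate in rates.items()})
```

Comparing these per-step rates across versions 1.0.0 and 1.0.1 is what surfaces where a redesign (icon label, shortcut button) is worth testing.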
Version 2.0
Info page | List to share




Version 2.1
Onboarding






User testing synthesis

Testing results
After reviewing the testing results, we decided on the following guiding principles for our design:
Design Principles
Accessibility, Aesthetic-Usability Effect, Confirmation & Consent, Hierarchy of Needs
Aesthetic Principles
Highlighting (such as making a button green to be pressed), Iconic Representation (icons across the page for similar indication of ingredient, map, home, back…)
Information Display Principles
Progressive Disclosure (for the ingredient page), Feedback Loop (adding info on the map + reusing it in the ingredient search), Chunking (combining many units of information into a limited number of chunks so the information is easier to process and remember, e.g. recipes, ingredients, cuisines)
Reflection
One of the challenges that surfaced during the creation of our wireframes was how best to approach (or avoid) design iteration without a clear brand identity. There is a balance to be struck between the focus on usability at this stage and the need for certain basic design elements that can inform the user in important ways through emphasis, de-emphasis, color coding, imagery, iconography, etc. It is rather easy to get hung up on even the simplest methods for indicating the brand mark (“why is it a circle?”), color hierarchy (“why green?”, “let’s stay away from red”), tints, drop shadows, etc. Without a clearly defined brand identity, these decisions risk taking on undue significance and attention, when our primary concern at the wireframe stage is how well the product functions.
As designers, we always strive for the perfect combination of form and function (except when we don’t), but this is easier said than done, and so this feels like an important inflection point.
A Park of the Future
SEP - DEC 2021
A Park of the Future is a semi-speculative reimagining of an existing park in Brooklyn, New York, in an imagined future reality. It was created as an immersive VR experience built in Unity, with some elements designed and 3D-modeled in SketchUp.
ROLE Designer, Experience Designer, XR Designer, Unity Developer, Researcher
TOOLS
Unity, SketchUp, Figma, Slides
Objective
XR for the Real World - Park Redesign for VR
Creating a digital immersive environment
Outcome
Process
Re-designing Thomas Greene Park, Brooklyn - New York
The site is Thomas Greene Park in Gowanus, a neighborhood park meant to be a haven for the community.
Objective
Redesign the park putting the community at the forefront.
Research
Site visit
Inspiration
Brainstorming
Sketching


3D Modeling
Development
Machine Learning Stunning Afterglows
FEB-MAY 2021
Image machine-learning training using the StyleGAN model, with Python and TensorFlow.
Overview
I’m working with my own photos of sunsets, which I’ve taken over the years in several countries and regions of the world. Sunset time is special to me: it gives me hope of a day gone and a new day coming, and it’s dreamy and romantic to watch the sky’s colors change. Most sunsets in NYC are beautiful and the colors are stunning, especially when the sky turns pink, but this happens because of pollution: the more air pollution there is, the pinker the sky turns as the sun sets. I hope to use machine learning on the colors of the sky at sunset to show variations of patterns in relation to pollution, and to use these patterns to predict more beautiful sunset skies.
Concept Exploration
- Air quality index affects afterglow colors in sunsets, especially in populated, dense areas where air pollution is higher.
- Comparing and contrasting lower-density areas and their afterglow colors.
- Leveraging this tool to create unique, stunning sunset backdrops for design.
Dataset
1st dataset: 1,470 photos; 629 frames extracted from 2 videos using OpenCV [by Aarati Akkapeddi https://colab.research.google.com/drive/1WWHNG4YqGSHfIYIUrC2tmoPQ3HOgU--e]
I converted 2 videos to images to add to the dataset. In total I planned to have around 2,000 images for lengthy periods of training.
This dataset included photos taken in the U.S. (several states), Lebanon, Spain, France, and Turkey.
Resized to 256x256
All in jpg format
2nd dataset: 787 frames, taken in New York City on June 25, 2019
extracted from 2 videos using opencv [by Aarati Akkapeddi]
Resized to 256x256
All in jpg format
Training Process
- I trained dataset 1 for 20 hours
- Downloaded the pkl files
- Interpolation
- Generated a new dataset with the generate.py code, selected images from it, and named it dataset 2
- I resumed training on the same worksheet and trained dataset 2 for 16 hours
- Interpolation
- Then used the last pickle file from the dataset 1 training (000266.pkl) to train it with the dataset 2 images (cross-pollination), carrying the learning from the first dataset into the latest training
- Interpolation
- Downloaded pickle file 000048 from the last combined (14th) training
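The interpolation steps above walk between points in StyleGAN's latent space: intermediate vectors between two seeds are fed to the trained generator, one frame per vector. A minimal sketch in plain NumPy, with the generator call omitted; the 512 latent dimension matches StyleGAN's default, and the step count is arbitrary:

```python
import numpy as np


def lerp_latents(z0, z1, steps):
    """Linear interpolation between two latent vectors.

    Returns an array of shape (steps, latent_dim); each row would be
    fed to the trained StyleGAN generator to render one frame of the
    interpolation video.
    """
    ts = np.linspace(0.0, 1.0, steps)[:, None]  # column of blend weights
    return (1.0 - ts) * z0[None, :] + ts * z1[None, :]


rng = np.random.default_rng(0)
z0 = rng.standard_normal(512)  # latent vector for seed A
z1 = rng.standard_normal(512)  # latent vector for seed B
path = lerp_latents(z0, z1, steps=60)
print(path.shape)  # (60, 512)
```

Rendering the 60 intermediate frames in order produces the smooth sunset-to-sunset morph seen in the interpolation outputs.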
Outcome
Seeds from training dataset 1
Seeds from training dataset 2
Noticeable Outcomes
- The fine styles (color scheme) are preserved in outputs 1, 2, 3
- Adaptive normalization appears in 1 & 2
- Comparing the sunset outputs from Lebanon (mostly) and NY: less dense areas were definitely lighter in sunset colors, while pink and orange dominated the New York sunsets, since the air is more polluted
- StyleGAN generated realistic images
- Slight loss of style and content in training 3
Further Development
- I want to interpolate using different inputs and manipulate outputs using existing learning tools
- Experiment with interactivity through Processing / p5.js
- I am designing sunsets; each design is unique and you can own it
Applications to look into
- Image Classification
Upload a picture, and the model tries to classify it depending on what it “sees” in the picture. This model uses transfer learning and is based on MobileNetV2.
- Style transfer
Blending image styles by adding these sunset images as backdrops, and using an image’s style to create a sunset.
- Learning and experimenting with deep-learning challenges and approaches for sky images
First images sample
Process
I have a primary dataset of 870 images. I began training with a portion of this set and hope to add all of the images to it. I used the StyleGAN training code and ran into a couple of errors in the first 3 trials before I finally got an output.
The code is still running for a better outcome, and I’ll upload more outputs in the coming week.
Early stage outcome
We can notice the clarity of the images improving with time and getting closer to looking real.
The image below looks interesting: the sky usually sits at the top of the frame and there’s a typical way we see sunsets, yet here it’s somewhat rotated. I’m searching for more interesting findings; in general, the colors and shapes seem close to those of real sunsets.
Bibliography
A. Implementation and Concept Research
I- What dust and pollution don't do
It is often written that natural and manmade dust and pollution cause colorful sunrises and sunsets. Indeed, the brilliant twilight "afterglows" that follow major volcanic eruptions owe their existence to the ejection of small particles high into the atmosphere (more will be said on this a bit later). If, however, it were strictly true that low-level dust and haze were responsible for brilliant sunsets, cities such as New York, Los Angeles, London, and Mexico City would be celebrated for their twilight hues. The truth is that tropospheric aerosols --- when present in abundance in the lower atmosphere as they often are over urban and continental areas --- do not enhance sky colors --- they subdue them. Clean air is, in fact, the main ingredient common to brightly colored sunrises and sunsets.
Afterglow colors are affected by smoke, air quality, and other factors. The stunning hues when affected by smoke are mostly pinkish
Source: https://www.spc.noaa.gov/publications/corfidi/sunset/#:~:text=Typical%20pollution%20droplets%20such%20as,are%20on%20the%20order%20of%20.&text=Similarly%2C%20the%20vibrant%20oranges%20and,more%20than%20soften%20sky%20colors.
II- Machine Learning which sunsets are considered beautiful from social media data
Source:
- https://twitter.com/Senor_Sunset
- Luminar: Sky replacement tool - AI Powered Tool - picture editor
- Adobe photoshop https://www.theverge.com/2020/9/21/21449124/photoshop-sky-replacement-tool-ai-machine-learning
- SunsetWx - Sunset and sunrise forecasts. Sunset & Sunrise Predictions: Model using an in-depth algorithm comprised of meteorological factors. https://sunsetwx.com/
- https://twitter.com/sunset_wx?ref_src=twsrc%5Etfw%7Ctwcamp%5Eembeddedtimeline%7Ctwterm%5Eprofile%3Asunset_wx&ref_url=https%3A%2F%2Fsunsetwx.com%2F
- Sunsets, Fraternities, and Deep Learning http://obsessionwithregression.blogspot.com/2016/05/sunsets-fraternities-and-deep-learning.html
Image-to-Image Translation with Conditional Adversarial Networks https://arxiv.org/abs/1611.07004
B. Air Quality Index - Pollution Levels NYC
Air quality in NYC: PM2.5 and ozone at current concentrations in New York City. Health Department estimates show that each year, PM2.5 pollution in New York City causes more than 3,000 deaths, 2,000 hospital admissions for lung and heart conditions, and approximately 6,000 emergency department visits for asthma in children and adults.
- Reference: https://aqicn.org/map/brooklyn/
Get in touch for more info & resources on this project
View Projects
https://www.arabnet.me/
https://youtube.com/playlist?list=PLyaIpDYnKGpl3NH3XtY9v5TvTuihuGaIA
https://www.youtube.com/watch?v=Zn7A6hyzID8
https://www.arabnet.me/english/editorials
https://www.yaleaders.org/event/arabnet-digital-summit-6th-edition/
https://www.crunchbase.com/event/arabnet-digital-summit-2015-2015527
https://www.youtube.com/watch?v=XUDL7UkImLo
https://www.arabnet.me/english/editorials/events/dubai-to-host-arabnet-digital-summit-this-june
https://www.arabnet.me/english/editorials/Events/ArabNet-Developer-Tournament-Videos
https://www.arabnet.me/english/ecosystem/Events/ArabNet-Summit
https://www.arabnet.me/english/editorials/business/industry/meet-all-the-winners-from-arabnet-riyadh-2016-
Riyadh 2013 - 2016 https://www.arabnet.me/english/conference/connecting-the-kingdom
Beirut 2012 - 2016 https://www.arabnet.me/english/conference/beirut
Summit Dubai 2013 - 2016
In the media
https://www.khaleejtimes.com/article/smart-dubai-joins-arabnet-for-digital-summit
https://www.wamda.com/en/2017/05/arabnet-dubai-promotes-investments-menas-emerging-markets
https://www.entrepreneur.com/en-ae/technology/arabnet-digital-summit-2016-in-dubai-to-focus-on-menas/273826
https://www.wamda.com/2016/04/arabnet-digital-summit-dubai-2016
https://www.naharnet.com/stories/en/101477
https://t3me.com/en/features/arabnet-digital-summit-2016-day-one-highlights/
https://issuu.com/arabnetquarterly/docs/arabnet_the_quarterly_issue_9__summ
https://www.youtube.com/watch?v=6LhyKpkdlck
https://www.opportunitiesforafricans.com/5-reasons-to-attend-arabnet-digital-summit-2016-in-dubai/
http://www.cnegypt.com/2016/05/arabnet-partners-with-smart-dubai-to.html
https://www.facebook.com/watch/?v=1797853140468270
https://www.arabianbusiness.com/gallery/arabnet-digital-summit-2016-in-pictures-638464
https://www.slideshare.net/ArabNetME/ondevice-research-consumer-habits-awareness-of-mgovernment-smart-cities-arabnet-digital-summit-2016
Game of Life
APR-MAY 2021
Game of Life is a 3-level, 8-bit game developed with Processing in p5.js, using object-oriented programming in JavaScript.

