(READ THIS ENTIRELY. WE WILL KNOW IF YOU HAVE NOT READ IT. DO NOT SPAM US. FOLLOW THE INSTRUCTIONS EXACTLY IF YOU WANT TO BE CONSIDERED)
We’re not just looking for an expert, WE’RE LOOKING FOR A PARTNER TO JOIN US AND OWN PART OF THE COMPANY!
We are a real estate investment firm about to revolutionize the entire real estate space, and we are searching for someone with significant skills and expertise in:
1. Google cloud infrastructure
2. Teaching AI to perform tasks such as:
a. Teaching it to scrape court websites for data and compile it into spreadsheets
b. Teaching it to read through the PDFs found in individual court cases and reason through what each document might indicate for the case, based on what our experts teach you to teach the AI
c. Teaching AI to perform complex and intricate tasks like running a title search on a property or reviewing a foreclosure case to provide all of its history as well as its current status (our experts will teach you so you can teach the AI)
3. Ability to get around the anti-scraping tools that large websites deploy to keep their data from being scraped (nothing illegal)
4. Ability to get AI to work around CAPTCHAs and adapt to new ones as they are replaced or reworked, or at least report that there is a problem it needs help with
5. Ability to write custom APIs
We’re looking for someone to start as a contractor, but our hope is to eventually make you a partner who receives equity in the company, so you can own part of the company and collect long-term residuals as well.
Here are some of the specific things we need. This is not an exhaustive list; these are the immediate needs, but there are many other things needed, with more and more to come as we grow:
CUSTOM API DEVELOPMENT
All of the items you see below need to connect back to our primary database, a custom solution built on FileMaker Server running on Linux. You are not required to understand FileMaker Pro beyond how it integrates with custom APIs, which is not a huge learning curve; FileMaker Pro plays well with many common architectures.
Each item below will require you to write a separate custom API so the data can make its way into our database. We have no interest in pre-built API solutions like Zapier, etc. The only exception would be where an outside API is the only way to achieve the goal and it is simply not possible to write one in house. It is important to us, as much as is humanly possible, that we own ALL of the work running behind the scenes.
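To make the integration concrete, here is a minimal sketch, in Python, of pushing one scraped record into FileMaker Server through its Data API (a real, documented REST interface). The server address, database name, layout, and field names below are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: pushing one scraped record into FileMaker Server via the
# FileMaker Data API. Server address, database, layout, and fields are hypothetical.
import requests

FM_HOST = "https://fm.example.com"   # assumed server address
DB = "Properties"                     # hypothetical database name

def fm_login(user: str, password: str) -> str:
    """Open a Data API session and return its token."""
    r = requests.post(
        f"{FM_HOST}/fmi/data/v1/databases/{DB}/sessions",
        json={}, auth=(user, password),
        headers={"Content-Type": "application/json"},
    )
    r.raise_for_status()
    return r.json()["response"]["token"]

def fm_create_record(token: str, layout: str, fields: dict) -> int:
    """Create one record on the given layout; returns the new record ID."""
    r = requests.post(
        f"{FM_HOST}/fmi/data/v1/databases/{DB}/layouts/{layout}/records",
        json={"fieldData": fields},
        headers={"Authorization": f"Bearer {token}"},
    )
    r.raise_for_status()
    return int(r.json()["response"]["recordId"])

if __name__ == "__main__":
    token = fm_login("api_user", "secret")   # credentials supplied by the client
    rec_id = fm_create_record(token, "Scrapes", {
        "Address": "123 Main Court SE",
        "Zestimate": 412000,                 # example values only
    })
    print("created record", rec_id)
```

Session tokens eventually expire, so in practice you would cache the token and re-authenticate only when a call comes back unauthorized.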
ZILLOW SCRAPING
We need to be able to scrape Zillow’s data, especially the Zestimate and the Rent Zestimate. This may seem like a simple task at first blush, but Zillow deploys POWERFUL anti-scraping tools, so this one is going to require some thought.
It should be noted that Zillow modifies and updates its anti-scraping tools over time, so you have to build this in a very intelligent way, where it can either figure out its own way around the changes or report to us that there is an issue, which you would then need to resolve. A rough sketch of that report-or-adapt pattern follows.
This needs to be a solution that can scale to millions of scrapes if necessary.
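As a rough illustration of the report-or-adapt pattern just described (not Zillow-specific code, and deliberately silent on anti-bot evasion): every scrape result is validated before it is stored, and a structural change raises an alert instead of silently writing bad data. The field names and thresholds are assumptions.

```python
# Sketch of the "adapt or alert" pattern: validate every scrape result and
# alert a human when the page structure appears to have changed.
import requests

class LayoutChanged(Exception):
    """Raised when a scrape no longer matches the expected page structure."""

def validate(data: dict) -> None:
    """Sanity-check a scrape before it reaches the database."""
    for key in ("zestimate", "rent_zestimate"):   # hypothetical field names
        value = data.get(key)
        if not isinstance(value, (int, float)) or value <= 0:
            raise LayoutChanged(f"implausible value for {key}: {value!r}")

def alert_operators(err: Exception) -> None:
    # Stub: in production this would page a human (email, Slack, etc.).
    print(f"ALERT: scraper needs attention: {err}")

def scrape(url: str, parse) -> dict | None:
    """Fetch a page, parse it with the supplied parser, validate, alert on drift."""
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        data = parse(resp.text)   # page-specific parsing passed in as a callable
        validate(data)
        return data
    except (requests.RequestException, LayoutChanged) as err:
        alert_operators(err)
        return None
```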
COUNTY COURT, OFFICIAL RECORDS & PROPERTY APPRAISER DATA SCRAPING & ANALYZING
The first piece of this one tends not to be too difficult: just teaching the AI where the county websites are, how to access them, and what data to pull.
This data then needs to be placed in a spreadsheet for safekeeping, but that is not its final destination.
The data must then be reasoned through and analyzed in many different ways, too many to discuss here, but our experts will teach you what to teach the AI and how it should reason through things.
Storage: All documents (almost always PDFs), regardless of their source, must then be stored in some type of cloud storage for later access by us or by the system itself (a rough sketch follows below). Links should also be stored for any document where the county provides a direct, publicly accessible, clickable link.
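Since Google Cloud infrastructure is named in the requirements, the document-storage step might use Google Cloud Storage; a minimal sketch, in which the bucket name and object layout are assumptions:

```python
# Minimal sketch of the document-storage step using Google Cloud Storage.
# The bucket name and object-naming scheme below are hypothetical.
from google.cloud import storage  # pip install google-cloud-storage

def store_document(local_path: str, case_id: str, filename: str) -> str:
    """Upload one court PDF and return the URI to store in the database."""
    client = storage.Client()                         # uses default credentials
    bucket = client.bucket("county-court-documents")  # hypothetical bucket
    blob = bucket.blob(f"{case_id}/{filename}")       # e.g. 2024-CA-001234/lis_pendens.pdf
    blob.upload_from_filename(local_path, content_type="application/pdf")
    return f"gs://{bucket.name}/{blob.name}"
```

For counties that publish a direct public link, you would store that URL alongside (or instead of) the uploaded copy.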
One of the important issues you’ll need to work around with county websites is CAPTCHAs. Most of them use CAPTCHAs, and you’ll need to find ways around them or teach the AI how to analyze and solve them.
TYPES OF COUNTY DATA & ANALYSIS
- STARTING WITH 4 COUNTIES - What you see below will first be deployed and tested in 4 counties; then we will go nationwide once we have proof of concept.
- ARCHAIC/INDIVIDUALIZED SYSTEMS - Before reading below about what types of data and analysis will be needed, it is important to understand that there are two major steps in county data pulls:
STEP 1 - going to the county website and pulling the data (ALWAYS different from county to county)
STEP 2 - analyzing the data that has been pulled (ALWAYS the same from county to county)
Every county website is different from the next, and the data headings are sometimes called different things on different sites. Also important to note: county websites tend to be archaic, many of them still running on a 1990s-looking interface. This means that although step 2 only needs to be built once, step 1 must be custom-made for each individual county, one at a time (a rough sketch of this split follows).
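That two-step split maps naturally onto a small adapter pattern: one scraper class per county behind a shared interface, feeding a single analyzer written once. A minimal sketch, with invented county, field, and phase names:

```python
# Sketch of the two-step split: one scraper per county (step 1, always
# different) feeding one shared analyzer (step 2, always the same).
from abc import ABC, abstractmethod

class CountyScraper(ABC):
    """Step 1: each county gets its own implementation of this interface."""

    @abstractmethod
    def fetch_case(self, case_number: str) -> dict:
        """Pull raw case data from this county's website."""

class OrangeCountyScraper(CountyScraper):   # hypothetical example county
    def fetch_case(self, case_number: str) -> dict:
        # A real implementation would drive this county's specific search UI.
        return {"case": case_number, "docs": [], "status_text": "LIS PENDENS FILED"}

def analyze_case(raw: dict) -> dict:
    """Step 2: county-independent analysis, written once."""
    return {
        "case": raw["case"],
        "phase": "pre-judgment" if "LIS PENDENS" in raw["status_text"] else "unknown",
    }

SCRAPERS: dict[str, CountyScraper] = {"orange": OrangeCountyScraper()}

def pull_and_analyze(county: str, case_number: str) -> dict:
    return analyze_case(SCRAPERS[county].fetch_case(case_number))
```

Adding a county then means writing one new scraper class and registering it; the analyzer never changes.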
- TITLE SEARCHES - A title search is a very complex and detailed process that requires LOTS of human logic and reasoning. There is no simple way to just pull data and make a decision. Many documents need to be searched out, analyzed, and compared with one another, along with various people, places, and events, to come up with a set of title search results. Our human experts can teach you what to teach the AI, but you have to be proficient enough to understand this yourself so you can properly teach the AI what to look for and how to analyze it.
- COURT CASE STATUS - Court cases, such as foreclosures, go through several main phases and sub-phases. The first step is to teach the AI to tell us which main phases are complete; this is relatively easy. Then the AI needs to delve into the documents in the case, one by one, analyze the sub-phases of each main phase, and tell us which sub-phases are complete. This sub-phase determination will require much deeper levels of understanding and analysis by the AI. Our experts will teach you what to teach the AI.
- PROPERTY APPRAISER DATA - The AI will need to be taught how to find a given property on the property appraiser website and pull critical data like beds, baths, square footage, legal description, etc.
- UPDATING PRIOR DATA - For records we already have in the database, we’ll need the ability to trigger an API that updates the data. For instance: say a property was previously put into our system by the AI you create, it has been in our database for 6 months, and nothing has been updated on its current Zillow value or foreclosure status. We need to be able to trigger an API that refreshes all of that info so the record reflects current data, WITHOUT re-adding old data that we already have (a rough sketch follows below).
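The refresh trigger could be the update half of the Data API sketch shown earlier: a PATCH that sends only the changed fields, which is exactly what keeps old data from being re-added. Names are again hypothetical.

```python
# Sketch of the refresh trigger: PATCH only the fields that changed, so the
# existing FileMaker record is updated in place and old data is never re-added.
import requests

FM_HOST = "https://fm.example.com"   # assumed server address (see earlier sketch)
DB = "Properties"                     # hypothetical database name

def fm_update_record(token: str, layout: str, record_id: int, changed: dict) -> None:
    """Send only the changed fields to an existing FileMaker record."""
    r = requests.patch(
        f"{FM_HOST}/fmi/data/v1/databases/{DB}/layouts/{layout}/records/{record_id}",
        json={"fieldData": changed},
        headers={"Authorization": f"Bearer {token}"},
    )
    r.raise_for_status()

# Example: refresh just the Zillow figures on a six-month-old record.
# fm_update_record(token, "Scrapes", 1042,
#                  {"Zestimate": 428000, "ForeclosureStatus": "Judgment entered"})
```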
ADDRESS MAPPING BACK TO ZILLOW
One of the biggest issues is mapping addresses in a way that Zillow can understand what you’re looking for. For instance, if a property is located at 123 Main Court SE, it can be written many ways:
- 123 Main CT. SE
- 123 Main Court SE
- 123 Main Court SouthEast
- 123 Main Court South East
- 123 Main CT Southeast
Etc...
Most addresses are fairly straightforward, but plenty of them have this quirk where the same address can be written many different ways.
And if the wrong address goes into the Zillow scrape, you either get wrong data or nothing.
So we need you to figure out how to teach the AI to verify it is looking at the right property when it is scraping. This may very well require some Google Maps integration. One common approach is sketched below.
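That approach is to canonicalize every variant to a single form before the lookup. A minimal sketch, in which the suffix and directional tables are abbreviated stand-ins for the full USPS abbreviation lists:

```python
# Sketch of one way to canonicalize address variants before a Zillow lookup:
# normalize street suffixes and directionals to a single spelling. The tables
# here are abbreviated; USPS Publication 28 has the complete lists.
SUFFIXES = {"court": "ct", "street": "st", "avenue": "ave", "drive": "dr"}
DIRECTIONALS = {"southeast": "se", "southwest": "sw",
                "northeast": "ne", "northwest": "nw"}

def canonicalize(address: str) -> str:
    """Reduce the many spellings of one address to a single canonical string."""
    text = address.lower().replace(".", "")
    # Join split directionals ("south east" -> "southeast") before tokenizing.
    for a, b in (("south east", "southeast"), ("south west", "southwest"),
                 ("north east", "northeast"), ("north west", "northwest")):
        text = text.replace(a, b)
    tokens = [DIRECTIONALS.get(t, SUFFIXES.get(t, t)) for t in text.split()]
    return " ".join(tokens)

# All five variants from the list above collapse to the same string:
variants = ["123 Main CT. SE", "123 Main Court SE", "123 Main Court SouthEast",
            "123 Main Court South East", "123 Main CT Southeast"]
assert len({canonicalize(v) for v in variants}) == 1
```

In practice a parsing library such as usaddress, plus a cross-check against the Google Maps Geocoding API as suggested above, would make this considerably more robust.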
COMPLETE MANUALS AS YOU WORK
At every step of the way, you need to be publishing a manual of every single piece of your work, live for us to see. It needs to be exhaustively detailed: something that, should anything happen to you, would let us hire someone else to pick up right where you left off. What you’re reading now is a random instruction that has nothing to do with the project itself, purposely inserted into the scope of work so that it’s not easily discovered. This helps us determine who is actually reading the entire scope and who is just skimming and spamming in their response. Please make sure to start your reply to us with “Macs are unix based”. That needs to be at the very top of your application. If it’s not there, your submission will be ignored.
OWNERSHIP
We will own 100% of the work you do, at all times, without exception. However, if you are offered some portion of company ownership, then as an owner you’ll gain back some portion of the work, and you’ll receive residual income as we sell the final product.
ACCOUNTS & ACCOUNT OWNERSHIP
Much of the work you do in the cloud will require accounts (Google accounts, etc.). When this happens, you are to alert us, and we will create all the necessary accounts. These should NEVER be in your name or created by you. It is never OK for you to create them and then give us the username and password. They must ALWAYS be accounts we create, own, and control, with all usernames and passwords in our hands. Equally important: NEVER USE YOUR OWN ACCOUNTS FOR ANYTHING. For instance: say there’s a service you already subscribe to that is needed for the project. You are NEVER to use your own account or put our info, data, code, or anything related to our projects into your own accounts at any time, for any reason, even just for testing or to be “kind”. If you need access to something that requires account creation, ask us to create the account and give you access to it.
NEVER USE YOUR OWN CREDIT CARD
In the past, we have had coders working on a project who needed access to some Google API feature that requires payment, so they just put their own card in temporarily to keep working, intending to have us switch it to ours later. Though it is a kind gesture, this is NEVER allowed. Do not EVER use your own money or cards, or make payments of any kind on our behalf. Alert us to the service, and we will make the payments and add our own cards.
DO NOT put anything on your own credit cards or into your own accounts. EVER. FOR ANY REASON.
BONUS, BUT NOT REQUIRED:
If you happen to be a Linux pro, MAKE SURE TO TELL US THIS! Not required, but super awesome if you are.
HOW TO APPLY:
If you want our attention, you only need to do a few things, divided into 3 paragraphs.
YOUR APPLICATION *MUST* BE UNIQUE TO US!
*DO NOT* send us the same copy and paste spam you send to everyone. We will IMMEDIATELY DELETE it.
Follow the instructions:
PARAGRAPH 1:
Prove to us you have what it takes by briefly listing the talents, skills, and history you have in this field.
PARAGRAPH 2:
Explain why you think this job is right for you and why you’re the right one for it.
PARAGRAPH 3:
Explain why you think this company is the right company to work for.
We NEVER read resumes or CVs at first. We only read the 3 paragraphs initially. So make them count.
That’s it. THOSE are the things that get our attention.
DO NOT SEND:
1. DO NOT send us a resume
2. DO NOT send us a long job history
3. DO NOT send us a copy and paste spam that you send to everyone
REMEMBER: MAKE YOUR RESPONSE UNIQUE!
THANKS, GOOD LUCK, AND GODSPEED!
We’re looking for an n8n expert to architect and build an end-to-end workflow that transforms raw lead data into personalized email outreach using AI, then pushes those emails through a third-party sending service.
What You’ll Do:
Design and implement n8n workflows to ingest, clean, and normalize lead data
Integrate AI models (e.g. OpenAI) to enrich leads with industry, company, and persona insights
Generate tailored subject lines and email copy via structured LLM prompts (see the sketch after this list)
Connect to an external email API (e.g. Instantly.ai) to deliver campaigns
Ensure robust error handling, logging, and easy credential configuration
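To make the structured-prompt item concrete: in n8n this logic would typically live in the OpenAI node or a JavaScript Code node, but the prompt shape is the same everywhere. A minimal Python sketch, with an illustrative model name and invented lead fields, forcing JSON output so downstream nodes can map fields directly:

```python
# Sketch of the structured-prompt step: one normalized lead in, JSON out.
# Model name and lead fields are illustrative, not from this posting.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def personalize(lead: dict) -> dict:
    """Return {'subject': ..., 'body': ...} for one normalized lead."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # illustrative model choice
        response_format={"type": "json_object"},  # force parseable output
        messages=[
            {"role": "system",
             "content": "You write concise B2B outreach. "
                        "Reply as JSON with keys 'subject' and 'body'."},
            {"role": "user", "content": json.dumps(lead)},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(personalize({"name": "Dana", "company": "Acme Foods", "industry": "CPG"}))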
Must-Have Skills:
Deep hands-on experience with n8n (core nodes, JSON handling, expressions)
JavaScript proficiency for custom code nodes
Familiarity with RESTful APIs and authentication (header auth, JSON payloads)
Experience integrating LLMs or similar AI services in automation
Understanding of email marketing best practices and personalization
Nice to Have:
Prior work with email delivery platforms (e.g. Instantly, SendGrid, Mailgun)
Exposure to marketing automation concepts
To Apply:
Please share:
A brief overview of relevant n8n/automation projects
Your approach to integrating AI for content generation
Availability and rate estimate
We’ll move quickly—if this sounds like you, let’s talk!
Really, I just need help with the full n8n build and the push to Instantly via API. There are 2 separate projects with similar deliverables that need to happen ASAP. I can handle the drip campaigns once the leads are in the email service provider.
We are seeking talented and enthusiastic developers to join our team in building cutting-edge AI products. You'll collaborate directly with our experienced founders and architects to create next-generation AI-powered applications, primarily focused on chatbot development, backend APIs, and integrating machine learning solutions.
What You'll Do:
Develop and refine backend APIs using Python frameworks (FastAPI preferred; see the sketch after this list).
Implement AI integrations using OpenAI, MLflow, or Metaflow.
Collaborate closely with our team to translate product requirements into technical solutions.
Deploy and manage cloud-based solutions (AWS, Azure, or GCP).
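As a rough sketch of the first two bullets (an illustration, not a spec from this posting): a minimal FastAPI route that forwards one chat turn to OpenAI. The route path, schema, and model name are placeholders.

```python
# Minimal sketch: a FastAPI backend route that forwards one chat turn to
# OpenAI. Route, request schema, and model name are placeholders.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI  # pip install fastapi uvicorn openai

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class ChatTurn(BaseModel):
    session_id: str
    message: str

@app.post("/chat")
def chat(turn: ChatTurn) -> dict:
    """Answer one user message; real code would also load session history."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=[{"role": "user", "content": turn.message}],
    )
    return {"reply": response.choices[0].message.content}

# Run locally with: uvicorn main:app --reload
```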
Required Skills:
Strong programming fundamentals (Python experience strongly preferred).
Exposure to or willingness to learn ML/AI frameworks (MLflow, Metaflow).
Familiarity with cloud platforms (AWS, Azure, GCP).
Excellent communication and problem-solving abilities.
Ideal Candidate:
Has previous experience or a genuine passion for AI chatbot development.
Is proactive, able to independently resolve issues, and enjoys working collaboratively.
Interested in ongoing opportunities and career growth with our startup.
We are seeking a skilled mechanical designer to assist with a robotics design project. The ideal candidate will have experience in creating innovative designs, prototyping, and working with various robotics applications. Attention to detail and creativity are essential. If you are passionate about robotics and mechanical design, we would love to hear from you!
About USA Restaurant Suppliers
We sell commercial kitchen equipment nationwide through a custom-built Shopify storefront. After a recent overhaul of 20,000 product descriptions, we’re seeing traction, but rankings, Core Web Vitals, and Google Merchant Center feeds still need help. That’s where you come in.
What You’ll Tackle
Enforce universal HTTPS across all URLs with 301 redirects and correct canonical tags
Audit and optimize our sitemap, robots.txt, and crawl‑budget settings (Search Console)
Diagnose and improve Core Web Vitals on desktop and mobile, targeting LCP below 2.5 s
Clean up Liquid theme bloat: image compression, script deferral, unused app removal
Validate and repair JSON‑LD Product schema so all SKUs qualify for rich results
Expand and sanitize our Google Merchant Center feed (from 100 to 20,000 SKUs)
Bulk update the top 250 product titles and meta descriptions via CSV/API for higher CTR (see the sketch after this list)
Produce a before/after report with metrics, a change log, and SOPs for in-house staff
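For the bulk title/meta item, one hedged sketch: Shopify keeps SEO titles and descriptions in the `global.title_tag` and `global.description_tag` product metafields, which the Admin REST API can set during a product update. The shop domain, API version, and CSV columns below are assumptions.

```python
# Hedged sketch of the bulk title/meta update: read rows from a CSV and set
# each product's title plus its SEO metafields via the Admin REST API.
# Shop domain, API version, and CSV column names are assumptions.
import csv
import requests

SHOP = "usa-restaurant-suppliers.myshopify.com"   # assumed shop domain
TOKEN = "shpat_..."                               # Admin API access token
API = f"https://{SHOP}/admin/api/2024-01"         # assumed API version

def update_product_seo(product_id: int, title: str, meta_description: str) -> None:
    payload = {"product": {
        "id": product_id,
        "title": title,
        "metafields": [
            {"namespace": "global", "key": "title_tag",
             "value": title, "type": "single_line_text_field"},
            {"namespace": "global", "key": "description_tag",
             "value": meta_description, "type": "single_line_text_field"},
        ],
    }}
    r = requests.put(f"{API}/products/{product_id}.json", json=payload,
                     headers={"X-Shopify-Access-Token": TOKEN})
    r.raise_for_status()

with open("top_250_products.csv") as f:           # assumed CSV columns
    for row in csv.DictReader(f):
        update_product_seo(int(row["product_id"]), row["title"], row["meta_description"])
```

A production run would also respect Shopify’s API rate limits (backing off on HTTP 429 responses).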
Must‑Have Skills & Tools
3+ years hands‑on Technical SEO for Shopify or other SaaS e‑commerce platforms
Strong grasp of Liquid, HTML, CSS, and basic JS for performance tweaks
Expert with Search Console, Lighthouse/PageSpeed Insights, Screaming Frog, and Ahrefs/SEMrush
Proven track record of fixing HTTP/HTTPS cannibalization and boosting Core Web Vitals
Deep experience with Google Merchant Center feeds, diagnostics, and policy compliance
Comfortable with bulk CSV uploads, Shopify API, and version‑controlled theme edits (Git)
Nice‑to‑Have
Familiarity with restaurant equipment or B2B catalogues
Experience setting up hreflang and multi‑currency feeds (CA/MX)
Basic SQL or BigQuery skills for log‑file analysis
What Success Looks Like
Average desktop LCP below 2.5 s, CLS below 0.1, FID below 100 ms
All HTTP variants 301-redirect to HTTPS, with a single canonical for each page
Merchant Center diagnostics: less than 1% disapproved items; 20,000 active SKUs
Average search position for tracked keywords improves from 35 to sub-20 in 60 days
Internal team trained and equipped with SOPs to maintain gains
How to Apply
Send a short intro, a link to your portfolio or case studies, and the answers to:
Describe a Shopify project where you reduced LCP below 2.5 s—what steps did you take?
Detail a Merchant Center disapproval you resolved. How long did it take?
If we gave you one SKU URL today, what are the first three fixes you’d make?
This role is for a Climate Tech startup, Series A.
We’re hiring senior/lead/staff software engineers for a 3-month, full-time contract to pair program remotely with our long-time solutions engineers. These team members bring deep domain knowledge, and you’ll bring top-tier software engineering craftsmanship to help accelerate development and build resilient, extensible systems.
This is a highly collaborative, hands-on coding role. You’ll join our Delivery Platform team to:
- Build and maintain a robust, testable platform that continually delivers working software.
- Pair program daily with our internal engineers to share best practices and mentor along the way.
- Shape feature development by working closely with Product and Design.
- Lead technical design of data ingestion, transformation, and publishing pipelines.
- Drive alignment between team execution and our broader vegetation intelligence goals.
Our Tech Stack
- Python – core application and data pipelines
- Dagster – job orchestration (see the sketch after this list)
- GCP + Kubernetes – cloud infrastructure and service orchestration
- Typescript + React + Deck.gl – for building dynamic, map-based frontends
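For flavor, a minimal sketch of an ingest-then-harmonize step expressed as Dagster software-defined assets; the asset names, the fake scene records, and the cloud-cover threshold are all invented for illustration.

```python
# Minimal sketch of an ingest -> harmonize step as Dagster software-defined
# assets. Asset names, records, and threshold are invented for illustration.
from dagster import asset, materialize

@asset
def raw_scenes() -> list[dict]:
    """Ingest: pull satellite scene metadata (stubbed with fake records)."""
    return [{"scene_id": "S2A_0001", "cloud_cover": 0.12},
            {"scene_id": "S2A_0002", "cloud_cover": 0.71}]

@asset
def usable_scenes(raw_scenes: list[dict]) -> list[dict]:
    """Harmonize: keep scenes clear enough for vegetation analysis."""
    return [s for s in raw_scenes if s["cloud_cover"] < 0.3]

if __name__ == "__main__":
    # Local run; in production Dagster would schedule and orchestrate this.
    materialize([raw_scenes, usable_scenes])
```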
You Bring
- Strong foundation in Extreme Programming values like TDD, pair programming, and iterative development.
- Great Python experience, ideally in data-heavy or ML-focused environments.
- Comfort working with data ingestion, APIs, and event-streaming systems.
- Knowledge of or interest in satellite data and geospatial processing.
- A collaborative mindset, and experience fostering inclusive, psychologically safe environments.
- Experience coaching others and elevating engineering practices.
What You'll Work On
You’ll help us fortify our data processing engine: we are ingesting satellite imagery, cleaning and harmonizing datasets, coordinating streaming data flows, and building a unified model of how data and domain knowledge intersect. This includes knowledge captured from utility partners, arborists, and internal SMEs. Your goal: help us build the single source of truth for vegetation intelligence.
Ways of Working
- Pair programming – daily collaboration with peers
- Test-Driven Development – build confidence in every line of code
- Weekly iterations – move fast without breaking trust
- Fully remote – but tightly aligned on West Coast time
We’re building a micro-to-medium consulting bench for M86Global, a US-based data strategy and analytics firm.
We're not hiring full-time roles. Instead, we’re assembling a flexible network of trusted experts we can bring in for scoped, short- to medium-term projects as client engagements ramp up — expected later this year.
We're especially interested in collaborators with experience in:
dbt / modern data stack
Python for ETL / transformation
Snowflake, Redshift, Postgres
Finance or healthcare data (bonus)
This is a project-based collaboration model, ideal for senior freelancers, fractional consultants, or builders who prefer async, specification-driven execution.
If we’re aligned, we’ll loop you in as needed and, with your consent, include you in our M86Global delivery bench.
I'm the creator of an original supernatural action-drama series titled Reaper, currently preparing to launch Book 1 as a webtoon and physical book while simultaneously developing a 1-minute animated teaser that will serve as:
- Promotional content for the Reaper webtoon launch (Book 1)
- Pitch material for potential anime adaptation and investor outreach
This teaser will introduce the emotional core, characters, supernatural world, and dark tone of Reaper with anime-level intensity and bold visuals.
Project Scope:
- 1-minute 2D animated teaser (cinematic pacing)
- Voiceover-driven narration/dialogue
- 7 characters featured (including dynamic action and emotional reaction shots)
- Stylized supernatural fight animation, glitch/FX work, and strong mood.
- 1-2 establishing landscape shots to introduce the world, specifically the continent of Nexoria.
What I Need:
- A solo animator or small studio capable of full-service production or collaboration (storyboard to final composite)
- Expertise in anime-style or cinematic 2D animation
- Ability to express emotion through subtle facial/body language
- Experience animating fight scenes, supernatural effects, or stylized combat is a major plus
- Communication and creative collaboration are important!
I Will Provide:
- A complete script draft
- Character references and tone/moodboard
- Voiceover files (or temp VO)
- Direction for mood, pacing, shot types, etc.
Budget:
$2,000-$2,500, fixed price.
Goal:
- Use the teaser to promote Webtoon Book 1
- Build buzz among fans and potential sponsors
- Use it as part of a pitch deck to secure funding or partnership for anime development