Intelligent Proof Verification

How AI analyzes photos, videos, and GPS data to verify favors.

One of the most powerful AI features on Haceme el Favor is the Intelligent Proof Verification system. Every time a favorecedor (the provider who completes a favor) uploads photos or videos as proof of task completion, the AI analyzes them across five critical dimensions to determine whether the favor was truly completed as described.

Automatic and Instant. Proof verification happens automatically the moment a provider uploads their evidence. There is no waiting period — the AI processes the submission in seconds and provides a confidence score immediately.

How It Works

When a favorecedor marks a favor as completed and uploads their proof (photos, videos, or both), the AI verification pipeline activates. The system examines the evidence through five distinct analysis dimensions, then produces a single confidence score that determines what happens next.

[Diagram] The verification pipeline:

  1. Upload: the provider submits photo and/or video proof.
  2. AI analysis engine: processes the five dimensions simultaneously (relevance, completeness, GPS accuracy, timestamp, quality) and generates a 0-100% confidence score within seconds.
  3. Routing: 90–100% is auto-approved (high confidence), 70–89% is flagged for manual review, and below 70% the provider may need to submit new proof.

The 5 Analysis Dimensions

Each uploaded proof is evaluated across five dimensions simultaneously:

1. Relevance

The AI compares the content of the photo or video against the original task description. For example, if the favor was to pick up a package from a pharmacy, the AI checks whether the image shows a package, a pharmacy setting, or relevant items. Images that show unrelated content receive a low relevance score.

2. Completeness

Some favors require multiple proof items — for example, a photo of the receipt and a photo of the delivered item. The AI checks whether all expected evidence has been submitted. If the task description implies multiple steps, the system verifies that each step is documented.
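As a rough illustration, the completeness check can be thought of as matching the set of proof types the task implies against what was actually submitted. The sketch below is a simplified model, not HEF's actual implementation, and the proof-type labels are hypothetical:

```python
def completeness_score(required_proofs, submitted_proofs):
    """Fraction of required proof types covered by the submission (0.0-1.0).

    In a real pipeline `required_proofs` would be inferred from the task
    description; here both arguments are plain sets of hypothetical labels.
    """
    required = set(required_proofs)
    if not required:
        return 1.0  # nothing specific required: any proof counts as complete
    return len(required & set(submitted_proofs)) / len(required)
```

For example, a task that implies both a receipt photo and a delivered-item photo would score 0.5 if only the receipt is submitted.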

3. GPS Accuracy

Every photo and video uploaded through the HEF app includes GPS metadata. The AI compares this location data against the target area specified in the favor. It also checks for GPS spoofing indicators — inconsistent metadata patterns that might suggest the location data has been falsified.
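Conceptually, the GPS comparison reduces to a distance check between the photo's embedded coordinates and the favor's target location. The sketch below uses the standard haversine formula; the 150 m radius and the linear fall-off are illustrative assumptions, not HEF's real tolerances:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def gps_score(photo, target, radius_m=150):
    """1.0 inside the target radius, falling linearly to 0.0 at twice the radius.

    Both `photo` and `target` are (lat, lon) tuples; `radius_m` is an
    assumed tolerance, not a published HEF parameter.
    """
    d = haversine_m(photo[0], photo[1], target[0], target[1])
    if d <= radius_m:
        return 1.0
    if d >= 2 * radius_m:
        return 0.0
    return 1.0 - (d - radius_m) / radius_m
```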

4. Timestamp Verification

The AI verifies that the photo or video was taken during the active task window — after the favor was accepted and before completion was submitted. Images taken significantly before or after the expected time frame are flagged for review.
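In pseudocode terms this is a simple window check: the capture timestamp must fall between acceptance and submission. The grace margin below is a hypothetical parameter added for illustration (e.g. to tolerate clock skew), not a documented HEF setting:

```python
from datetime import datetime, timedelta

def timestamp_score(taken_at, accepted_at, submitted_at,
                    grace=timedelta(minutes=10)):
    """1.0 if the capture time sits inside the task window (plus a small
    assumed grace margin), else 0.0 so the proof gets flagged for review."""
    in_window = accepted_at - grace <= taken_at <= submitted_at + grace
    return 1.0 if in_window else 0.0
```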

5. Quality Check

Blurry, dark, or obstructed images reduce the value of proof. The AI evaluates image clarity, lighting conditions, and whether key details are visible. If the image quality is too low to verify the task, it affects the overall confidence score.

Confidence Scoring

After analyzing all five dimensions, the system produces a single confidence percentage from 0 to 100:

  • 90–100% (High Confidence): The proof clearly matches the task, all evidence is present, GPS and timestamps are consistent, and image quality is good. These submissions are auto-approved and payment processing begins immediately.
  • 70–89% (Medium Confidence): Most dimensions pass, but one or more have minor concerns: perhaps the GPS is slightly off, or one proof image is unclear. These are flagged for manual review by the HEF team or the solicitante (the requester).
  • Below 70% (Low Confidence): Significant issues were detected. The favorecedor is notified and may need to upload additional or replacement proof. This does not automatically mean the task was not completed; it means the evidence provided is not sufficient for verification.
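Putting it together, the routing described above can be sketched as a weighted average over the five per-dimension scores. The weights here are invented for illustration; only the thresholds (90% and 70%) come from the article:

```python
# Illustrative weights only -- the real weighting is internal to HEF.
WEIGHTS = {
    "relevance": 0.30,
    "completeness": 0.20,
    "gps": 0.20,
    "timestamp": 0.15,
    "quality": 0.15,
}

def confidence(scores):
    """Combine per-dimension scores (each 0.0-1.0) into a 0-100 percentage
    and route the submission according to the published thresholds."""
    pct = 100 * sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)
    if pct >= 90:
        return pct, "auto-approved"
    if pct >= 70:
        return pct, "manual review"
    return pct, "resubmit"
```

For instance, a submission that is perfect except for missing GPS data drops by the GPS weight and lands in manual review rather than auto-approval.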

What the AI Checks For

Beyond the five main dimensions, the AI also looks for specific verification signals:

  • Location landmarks: Recognizable buildings, street signs, or geographic features that confirm the correct location.
  • Relevant items: Shopping bags, documents, packages, or other objects described in the task.
  • GPS metadata integrity: Patterns that indicate genuine location data versus spoofed coordinates.
  • Image manipulation: Signs that a photo has been digitally altered, spliced, or generated artificially.

What Happens If Verification Fails

If the confidence score falls below 70%, the favorecedor has several options:

  1. Resubmit proof — Upload new, clearer photos or videos that better document the completed task. The AI will re-analyze the new submission.
  2. Add supplementary evidence — Provide additional photos from different angles, or a video walkthrough showing the completed task.
  3. Contact the solicitante — Use in-app chat to discuss the issue directly. The requester can manually approve the task if they are satisfied.
  4. Escalate to support — If the favorecedor believes the AI assessment is incorrect, they can open a dispute for human review.

GPS Must Be Enabled. For proof verification to work properly, location services must be enabled on the provider's device when taking photos and videos. Proof submitted without GPS metadata will always require manual review.

Tips for Providers. To get the highest confidence scores: take photos in good lighting, include relevant items clearly in the frame, make sure GPS is enabled, and capture proof from the location specified in the favor. The better your proof, the faster your payment.
