Intelligent Proof Verification
One of the most powerful AI features on Haceme el Favor is the Intelligent Proof Verification system. Every time a favorecedor uploads photos or videos as proof of task completion, the AI analyzes them across five critical dimensions to determine whether the favor was truly completed as described.
How It Works
When a favorecedor marks a favor as completed and uploads their proof (photos, videos, or both), the AI verification pipeline activates. The system examines the evidence through five distinct analysis dimensions, then produces a single confidence score that determines what happens next.
The 5 Analysis Dimensions
Each uploaded proof is evaluated across five dimensions simultaneously:
1. Relevance
The AI compares the content of the photo or video against the original task description. For example, if the favor was to pick up a package from a pharmacy, the AI checks whether the image shows a package, a pharmacy setting, or relevant items. Images that show unrelated content receive a low relevance score.
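A relevance check of this kind can be sketched as an overlap score between keywords in the task description and the labels an image-recognition step returns for the proof photo. Everything below is illustrative: the function name, the stopword list, and the idea that labels arrive as plain strings are assumptions, not HEF's actual model.

```python
# Hypothetical sketch: score relevance as keyword overlap between the task
# description and labels detected in the proof image. Not HEF's real model.

def relevance_score(task_description: str, detected_labels: list[str]) -> float:
    """Return a 0-100 relevance score based on shared keywords."""
    stopwords = {"a", "the", "from", "to", "of", "and", "in", "on", "for"}
    task_words = {w.lower().strip(".,") for w in task_description.split()} - stopwords
    labels = {label.lower() for label in detected_labels}
    if not task_words:
        return 0.0
    overlap = task_words & labels
    return round(100 * len(overlap) / len(task_words), 1)
```

With the pharmacy example, `relevance_score("pick up a package from a pharmacy", ["Package", "Pharmacy", "counter"])` scores the hit on "package" and "pharmacy", while an unrelated image with no matching labels scores near zero.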
2. Completeness
Some favors require multiple proof items — for example, a photo of the receipt and a photo of the delivered item. The AI checks whether all expected evidence has been submitted. If the task description implies multiple steps, the system verifies that each step is documented.
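Conceptually, the completeness check reduces to a set difference between the proof items the task implies and the items actually submitted. The sketch below assumes the expected-items list has already been extracted from the task description; how HEF derives that list is not documented here.

```python
# Hypothetical sketch: report which expected proof items are still missing.
# The expected-items list is assumed to come from parsing the task description.

def missing_proof(expected: list[str], submitted: list[str]) -> list[str]:
    """Return the expected proof items that have no matching submission."""
    submitted_set = {item.lower() for item in submitted}
    return [item for item in expected if item.lower() not in submitted_set]
```

For the receipt-plus-delivery example, `missing_proof(["receipt photo", "delivered item photo"], ["receipt photo"])` would flag the missing delivery photo.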
3. GPS Accuracy
Every photo and video uploaded through the HEF app includes GPS metadata. The AI compares this location data against the target area specified in the favor. It also checks for GPS spoofing indicators — inconsistent metadata patterns that might suggest the location data has been falsified.
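The core of the location check is a distance test between the proof's GPS coordinates and the favor's target point. The sketch below uses the standard haversine formula; the 150 m default radius is an assumption, not a documented HEF value, and real spoofing detection would involve additional metadata signals beyond a distance check.

```python
import math

# Illustrative sketch: accept proof only if its GPS metadata falls within a
# radius of the favor's target point. The 150 m default is an assumption.

def within_target(photo_lat: float, photo_lon: float,
                  target_lat: float, target_lon: float,
                  radius_m: float = 150.0) -> tuple[bool, float]:
    """Haversine distance between the proof's GPS point and the target."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(photo_lat), math.radians(target_lat)
    dp = math.radians(target_lat - photo_lat)
    dl = math.radians(target_lon - photo_lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    return dist <= radius_m, round(dist, 1)
```

A photo taken at the target passes immediately, while one taken roughly a kilometre away fails the radius test and would drag down the GPS dimension score.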
4. Timestamp Verification
The AI verifies that the photo or video was taken during the active task window — after the favor was accepted and before completion was submitted. Images taken significantly before or after the expected time frame are flagged for review.
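The window test itself is simple: the capture timestamp must fall between acceptance and submission. The sketch below adds a small grace period to absorb clock skew; the 10-minute value is an assumption, not a documented HEF parameter.

```python
from datetime import datetime, timedelta

# Illustrative sketch: a capture timestamp passes only if it falls inside the
# active task window. The 10-minute grace period is an assumed value.

def timestamp_ok(captured_at: datetime, accepted_at: datetime,
                 submitted_at: datetime,
                 grace: timedelta = timedelta(minutes=10)) -> bool:
    """True if the proof was captured between acceptance and submission."""
    return accepted_at - grace <= captured_at <= submitted_at + grace
```

A photo captured mid-task passes; one captured hours before the favor was accepted fails and gets flagged for review.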
5. Quality Check
Blurry, dark, or obstructed images reduce the value of proof. The AI evaluates image clarity, lighting conditions, and whether key details are visible. If the image quality is too low to verify the task, it affects the overall confidence score.
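As a rough intuition for the clarity check: production pipelines typically measure sharpness with something like the variance of the image Laplacian, but the idea can be shown with a much cruder proxy, the contrast between neighbouring pixels on a grayscale grid. The threshold value below is invented for illustration.

```python
# Illustrative sketch: a crude clarity proxy on a grayscale pixel grid.
# Real systems use stronger measures (e.g. variance of Laplacian); this just
# checks local contrast between horizontally adjacent pixels.

def sharpness(pixels: list[list[int]]) -> float:
    """Mean absolute difference between horizontally adjacent pixels (0-255)."""
    diffs = [abs(row[i + 1] - row[i])
             for row in pixels for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs) if diffs else 0.0

def too_blurry(pixels: list[list[int]], threshold: float = 5.0) -> bool:
    # Below the threshold, edges are too soft to verify task details.
    return sharpness(pixels) < threshold
```

A flat, featureless image scores zero contrast and is rejected, while an image with crisp edges passes.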
Confidence Scoring
After analyzing all five dimensions, the system produces a single confidence percentage from 0 to 100:
- 90–100% (High Confidence): The proof clearly matches the task, all evidence is present, GPS and timestamps are consistent, and image quality is good. These submissions are auto-approved and payment processing begins immediately.
- 70–89% (Medium Confidence): Most dimensions pass, but one or more raise minor concerns: perhaps the GPS is slightly off, or one proof image is unclear. These submissions are flagged for manual review by the HEF team or the solicitante.
- Below 70% (Low Confidence): Significant issues were detected. The favorecedor is notified and may need to upload additional or replacement proof. A low score does not automatically mean the task was not completed; it means the evidence provided is not sufficient for verification.
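The tiering above can be sketched as a small routing function: average the five dimension scores into one confidence value, then pick the outcome. Equal weighting is an assumption made here for illustration; HEF's actual weighting of the dimensions is not documented.

```python
# Illustrative sketch of the tiering logic: combine five dimension scores
# (each 0-100) into one confidence value and route it. Equal weights are an
# assumption; HEF's real weighting may differ.

DIMENSIONS = ("relevance", "completeness", "gps", "timestamp", "quality")

def route(scores: dict[str, float]) -> tuple[float, str]:
    confidence = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    if confidence >= 90:
        return confidence, "auto-approve"       # payment processing begins
    if confidence >= 70:
        return confidence, "manual-review"      # flagged for HEF team / solicitante
    return confidence, "request-more-proof"     # favorecedor notified
```

For example, a submission scoring well on everything except one slightly unclear photo would land in the 70–89% band and go to manual review rather than being rejected outright.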
What the AI Checks For
Beyond the five main dimensions, the AI also looks for specific verification signals:
- Location landmarks: Recognizable buildings, street signs, or geographic features that confirm the correct location.
- Relevant items: Shopping bags, documents, packages, or other objects described in the task.
- GPS metadata integrity: Patterns that indicate genuine location data versus spoofed coordinates.
- Image manipulation: Signs that a photo has been digitally altered, spliced, or generated artificially.
What Happens If Verification Fails
If the confidence score falls into the low-confidence range (below 70%), the favorecedor has several options:
- Resubmit proof: Upload new, clearer photos or videos that better document the completed task. The AI will re-analyze the new submission.
- Add supplementary evidence: Provide additional photos from different angles, or a video walkthrough showing the completed task.
- Contact the solicitante: Use in-app chat to discuss the issue directly. The requester can manually approve the task if they are satisfied.
- Escalate to support: If the favorecedor believes the AI assessment is incorrect, they can open a dispute for human review.