LyraLog9: Camera System

APlayerCameraManager

The PlayerController configures APlayerCameraManager

AEqZeroPlayerController::AEqZeroPlayerController(const FObjectInitializer& ObjectInitializer)
    : Super(ObjectInitializer)
{
    PlayerCameraManagerClass = AEqZeroPlayerCameraManager::StaticClass();
}

ALyraPlayerCameraManager::ALyraPlayerCameraManager(const FObjectInitializer& ObjectInitializer)
    : Super(ObjectInitializer)
{
    DefaultFOV = LYRA_CAMERA_DEFAULT_FOV;
    ViewPitchMin = LYRA_CAMERA_DEFAULT_PITCH_MIN;
    ViewPitchMax = LYRA_CAMERA_DEFAULT_PITCH_MAX;

    UICamera = CreateDefaultSubobject<ULyraUICameraManagerComponent>(UICameraComponentName);
}

The constructor sets up a few default values.

UICamera is there so that camera behavior can be customized while UI is up, but the Lyra project never actually implements anything with it.

In the original project it looks like this: when the UI needs control, the UI camera's functions modify the output.

void ALyraPlayerCameraManager::UpdateViewTarget(FTViewTarget& OutVT, float DeltaTime)
{
    // If the UI Camera is looking at something, let it have priority.
    if (UICamera->NeedsToUpdateViewTarget())
    {
        Super::UpdateViewTarget(OutVT, DeltaTime);
        UICamera->UpdateViewTarget(OutVT, DeltaTime);
        return;
    }

    Super::UpdateViewTarget(OutVT, DeltaTime);
}

For example, in another project of mine, CustomCameraBehavior modifies the camera's OutLocation, OutRotation and OutFOV, which are then returned through OutVT:

/*
 * Run On Server
 */
void ACwlBaseCameraManager::UpdateViewTargetInternal(FTViewTarget& OutVT, float DeltaTime)
{
    Super::UpdateViewTargetInternal(OutVT, DeltaTime);

    if (OutVT.Target)
    {
        if (OutVT.Target->IsA<ACwlBasePlayer>())
        {
            FVector OutLocation;
            FRotator OutRotation;
            float OutFOV;

            if (CustomCameraBehavior(DeltaTime, OutLocation, OutRotation, OutFOV))
            {
                OutVT.POV.Location = OutLocation;
                OutVT.POV.Rotation = OutRotation;
                OutVT.POV.FOV = OutFOV;
            }
            else
            {
                OutVT.Target->CalcCamera(DeltaTime, OutVT.POV);
            }
        }
        else
        {
            OutVT.Target->CalcCamera(DeltaTime, OutVT.POV);
        }
    }
}

Since there's no plan for a UI camera for now, this override is just a pass-through:

void AEqZeroPlayerCameraManager::UpdateViewTarget(FTViewTarget& OutVT, float DeltaTime)
{
    Super::UpdateViewTarget(OutVT, DeltaTime);
}

There's one more place that can directly return the camera's location and rotation:

the CameraComponent's GetCameraView.

void APlayerCameraManager::UpdateViewTarget(FTViewTarget& OutVT, float DeltaTime)
{
    // ...

    if (ACameraActor* CamActor = Cast<ACameraActor>(OutVT.Target))
    {
        // Viewing through a camera actor.
        CamActor->GetCameraComponent()->GetCameraView(DeltaTime, OutVT.POV);
    }
    else
    {
        // ...
    }
    // ...
}

That still leaves DisplayDebug; we'll fill it in after writing the rest.

Why use this camera system instead of a spring arm?

  • It lacks state-driven behavior: you can only change rotation, zoom in and out, and offsets. You could maintain your own state layer, but then how would it mix with abilities?
  • SpringArm parameter changes (TargetArmLength, SocketOffset, Rotation) are usually instant or simple interpolation.
  • SpringArm is purely client-side.

The console command `showdebug camera` turns on camera debugging.

This camera mode asset is reached by clicking through from the pawn data; you won't easily find it by browsing folders yourself:

/Game/Characters/Cameras/CM_ThirdPerson.CM_ThirdPerson

Inside you can see the third-person and death modes, plus two curves.

The Blueprint parent class is ULyraCameraMode_ThirdPerson, which inherits from ULyraCameraMode.

ShooterCore also has /ShooterCore/Camera/CM_ThirdPersonADS.CM_ThirdPersonADS, the aim-down-sights mode.

=> The content below was copied over from the first draft; there should still be a final pass to come.

ALyraPlayerCameraManager

It inherits APlayerCameraManager; as everyone probably knows, you assign the default class in the player controller's constructor.

ALyraPlayerController::ALyraPlayerController(const FObjectInitializer& ObjectInitializer)
    : Super(ObjectInitializer)
{
    PlayerCameraManagerClass = ALyraPlayerCameraManager::StaticClass();
}

UCLASS(notplaceable, MinimalAPI)
class ALyraPlayerCameraManager : public APlayerCameraManager
{
    GENERATED_BODY()

public:

    ALyraPlayerCameraManager(const FObjectInitializer& ObjectInitializer);

    ULyraUICameraManagerComponent* GetUICameraComponent() const;

protected:

    virtual void UpdateViewTarget(FTViewTarget& OutVT, float DeltaTime) override;

    virtual void DisplayDebug(UCanvas* Canvas, const FDebugDisplayInfo& DebugDisplay, float& YL, float& YPos) override;

private:
    /** The UI camera component; it takes control of the camera when the UI is doing something important (and gameplay doesn't have priority). */
    // UActorComponent
    UPROPERTY(Transient)
    TObjectPtr<ULyraUICameraManagerComponent> UICamera;
};

The constructor is just some camera property configuration.

The location and rotation that ALyraPlayerCameraManager::UpdateViewTarget returns directly drive the camera.

ALyraPlayerCameraManager::DisplayDebug is for debugging; we'll look at it in a moment.

As for the properties: there's an actor component intended for UI, but it's only declared here and apparently never used.

ULyraUICameraManagerComponent

It's effectively unused; it's just a hook where you can add code if your UI ever needs to override camera behavior.

ULyraCameraMode

How are two camera positions blended: linearly, or with some easing?

We'll click into the formulas in a moment.

/**
 * ELyraCameraModeBlendFunction
 *
 *  Blend function used for transitioning between camera modes.
 */
UENUM(BlueprintType)
enum class ELyraCameraModeBlendFunction : uint8
{
    // Does a simple linear interpolation. 
    Linear,

    // Immediately accelerates, but smoothly decelerates into the target.  Ease amount controlled by the exponent.
    EaseIn,

    // Smoothly accelerates, but does not decelerate into the target.  Ease amount controlled by the exponent.
    EaseOut,

    // Smoothly accelerates and decelerates.  Ease amount controlled by the exponent.
    EaseInOut,

    COUNT   UMETA(Hidden)
};

The camera's view data:

/**
 * FLyraCameraModeView
 *
 *  View data produced by the camera mode that is used to blend camera modes.
 */
struct FLyraCameraModeView
{
public:

    FLyraCameraModeView();

    void Blend(const FLyraCameraModeView& Other, float OtherWeight);

public:

    FVector Location;
    FRotator Rotation;
    FRotator ControlRotation;
    float FieldOfView;
};

Data first:

UCLASS(MinimalAPI, Abstract, NotBlueprintable)
class ULyraCameraMode : public UObject
{
    GENERATED_BODY()

    // ...

protected:
    // A tag that can be queried by gameplay code that cares when a kind of camera mode is active
    // without having to ask about a specific mode (e.g., when aiming downsights to get more accuracy)
    UPROPERTY(EditDefaultsOnly, Category = "Blending")
    FGameplayTag CameraTypeTag;

    // The final view data (location, rotation, ...) this camera mode wants to blend toward
    FLyraCameraModeView View;

    // FOV (in degrees).
    UPROPERTY(EditDefaultsOnly, Category = "View", Meta = (UIMin = "5.0", UIMax = "170", ClampMin = "5.0", ClampMax = "170.0"))
    float FieldOfView;

    // Minimum pitch
    UPROPERTY(EditDefaultsOnly, Category = "View", Meta = (UIMin = "-89.9", UIMax = "89.9", ClampMin = "-89.9", ClampMax = "89.9"))
    float ViewPitchMin;

    // Maximum pitch
    UPROPERTY(EditDefaultsOnly, Category = "View", Meta = (UIMin = "-89.9", UIMax = "89.9", ClampMin = "-89.9", ClampMax = "89.9"))
    float ViewPitchMax;

    // Blend duration
    UPROPERTY(EditDefaultsOnly, Category = "Blending")
    float BlendTime;

    // Blend function (the easing curve used)
    UPROPERTY(EditDefaultsOnly, Category = "Blending")
    ELyraCameraModeBlendFunction BlendFunction;

    // Exponent used by blend functions to control the shape of the curve.
    UPROPERTY(EditDefaultsOnly, Category = "Blending")
    float BlendExponent;

    // Blend alpha before easing; see the code for the details
    float BlendAlpha;

    // Weight computed from BlendAlpha through the blend function
    float BlendWeight;

protected:
    /** If true, skips all interpolation and puts the camera in its ideal location. Set false the next frame. */
    UPROPERTY(transient)
    uint32 bResetInterpolation:1;
};

Now let's look at the longer functions.

FVector ULyraCameraMode::GetPivotLocation() const
{
    const AActor* TargetActor = GetTargetActor();
    check(TargetActor);

    if (const APawn* TargetPawn = Cast<APawn>(TargetActor))
    {
        // Height adjustments for characters to account for crouching.
        if (const ACharacter* TargetCharacter = Cast<ACharacter>(TargetPawn))
        {
            const ACharacter* TargetCharacterCDO = TargetCharacter->GetClass()->GetDefaultObject<ACharacter>();
            const UCapsuleComponent* CapsuleComp = TargetCharacter->GetCapsuleComponent();
            const UCapsuleComponent* CapsuleCompCDO = TargetCharacterCDO->GetCapsuleComponent();

            const float DefaultHalfHeight = CapsuleCompCDO->GetUnscaledCapsuleHalfHeight();
            const float ActualHalfHeight = CapsuleComp->GetUnscaledCapsuleHalfHeight();
            const float HeightAdjustment = (DefaultHalfHeight - ActualHalfHeight) + TargetCharacterCDO->BaseEyeHeight;

            // Avoids a sudden camera change when crouching.
            // While crouched, ActualHalfHeight shrinks below DefaultHalfHeight, so HeightAdjustment is positive and compensates.
            // The smooth transition is handled elsewhere.
            return TargetCharacter->GetActorLocation() + (FVector::UpVector * HeightAdjustment);
        }

        // A Pawn interface: returns the Pawn's eye location
        return TargetPawn->GetPawnViewLocation();
    }

    return TargetActor->GetActorLocation();
}

ULyraCameraMode::GetPivotRotation()

This one likewise just goes through the Pawn's interface.

void ULyraCameraMode::UpdateCameraMode(float DeltaTime)
{
    UpdateView(DeltaTime);
    UpdateBlending(DeltaTime);
}

UpdateCameraMode is called by the camera mode stack.

void ULyraCameraMode::UpdateView(float DeltaTime)
{
    // The eye-level camera pivot covered earlier; in short, a location and a rotation
    FVector PivotLocation = GetPivotLocation();
    FRotator PivotRotation = GetPivotRotation();

    // Clamp the pitch
    PivotRotation.Pitch = FMath::ClampAngle(PivotRotation.Pitch, ViewPitchMin, ViewPitchMax);

    View.Location = PivotLocation;
    View.Rotation = PivotRotation;
    View.ControlRotation = View.Rotation;
    View.FieldOfView = FieldOfView;
}

SetBlendWeight sets the blend weight directly; the branches of the switch differ only in the math used.

void ULyraCameraMode::SetBlendWeight(float Weight)
{
    BlendWeight = FMath::Clamp(Weight, 0.0f, 1.0f);

    // Since we're setting the blend weight directly, we need to calculate the blend alpha to account for the blend function.
    const float InvExponent = (BlendExponent > 0.0f) ? (1.0f / BlendExponent) : 1.0f;

    switch (BlendFunction)
    {
    case ELyraCameraModeBlendFunction::Linear:
        BlendAlpha = BlendWeight;
        break;

    // Slow start, fast finish: delayed response, then a quick switch late.
    // For things that should respond with a delay, e.g. acceleration ramp-up or the start of aiming.
    case ELyraCameraModeBlendFunction::EaseIn:
        BlendAlpha = FMath::InterpEaseIn(0.0f, 1.0f, BlendWeight, InvExponent);
        break;

    // Fast start, slow finish: switches quickly then settles smoothly; responsive with a smooth tail, the most natural feel.
    // The common case: aiming, crouching, jump/landing cameras.
    case ELyraCameraModeBlendFunction::EaseOut:
        BlendAlpha = FMath::InterpEaseOut(0.0f, 1.0f, BlendWeight, InvExponent);
        break;

    // Slow start, medium middle, slow finish (S-curve): extremely smooth, nothing abrupt.
    // For long, high-smoothness transitions (walk-to-run, vehicles, slow-motion cameras).
    case ELyraCameraModeBlendFunction::EaseInOut:
        BlendAlpha = FMath::InterpEaseInOut(0.0f, 1.0f, BlendWeight, InvExponent);
        break;

    default:
        checkf(false, TEXT("SetBlendWeight: Invalid BlendFunction [%d]\n"), (uint8)BlendFunction);
        break;
    }
}

The previous function force-sets the weight; this one effectively advances it gradually each tick.

void ULyraCameraMode::UpdateBlending(float DeltaTime)
{
    if (BlendTime > 0.0f)
    {
        // Accumulate the blend alpha, capped at 1
        BlendAlpha += (DeltaTime / BlendTime);
        BlendAlpha = FMath::Min(BlendAlpha, 1.0f);
    }
    else
    {
        // A non-positive blend time means blend immediately
        BlendAlpha = 1.0f;
    }

    // Exponent shaping the easing curve
    const float Exponent = (BlendExponent > 0.0f) ? BlendExponent : 1.0f;
    switch (BlendFunction)
    {
        // ...
    }
}

ULyraCameraModeStack

Just a container, a UObject subclass.

Its data:

    bool bIsActive;

    UPROPERTY()
    TArray<TObjectPtr<ULyraCameraMode>> CameraModeInstances;

    UPROPERTY()
    TArray<TObjectPtr<ULyraCameraMode>> CameraModeStack;

In use it simply maintains these two arrays.

How they're maintained, we don't know yet.

It's active by default; activating or deactivating iterates the stack and notifies every mode via a callback, though there's no concrete logic behind those callbacks here.

void ULyraCameraModeStack::ActivateStack()
{
    if (!bIsActive)
    {
        bIsActive = true;

        // Notify camera modes that they are being activated.
        for (ULyraCameraMode* CameraMode : CameraModeStack)
        {
            check(CameraMode);
            CameraMode->OnActivation();
        }
    }
}

void ULyraCameraModeStack::DeactivateStack()
{
    if (bIsActive)
    {
        bIsActive = false;

        // Notify camera modes that they are being deactivated.
        for (ULyraCameraMode* CameraMode : CameraModeStack)
        {
            check(CameraMode);
            CameraMode->OnDeactivation();
        }
    }
}

PushCameraMode

Now for a longer function.


void ULyraCameraModeStack::PushCameraMode(TSubclassOf<ULyraCameraMode> CameraModeClass)
{
    if (!CameraModeClass)
    {
        return;
    }

    // Look it up by class
    ULyraCameraMode* CameraMode = GetCameraModeInstance(CameraModeClass);
    check(CameraMode);

    // ...
}

It starts by getting a mode instance from the class.

It's fairly clear that CameraModeInstances is a cache of already-created instances, there to avoid creating duplicates.

ULyraCameraMode* ULyraCameraModeStack::GetCameraModeInstance(TSubclassOf<ULyraCameraMode> CameraModeClass)
{
    check(CameraModeClass);

    // First see if we already created one.
    for (ULyraCameraMode* CameraMode : CameraModeInstances)
    {
        if ((CameraMode != nullptr) && (CameraMode->GetClass() == CameraModeClass))
        {
            return CameraMode;
        }
    }

    // Not found, so we need to create it.
    ULyraCameraMode* NewCameraMode = NewObject<ULyraCameraMode>(GetOuter(), CameraModeClass, NAME_None, RF_NoFlags);
    check(NewCameraMode);

    CameraModeInstances.Add(NewCameraMode);

    return NewCameraMode;
}

void ULyraCameraModeStack::PushCameraMode(TSubclassOf<ULyraCameraMode> CameraModeClass)
{
    if (!CameraModeClass)
    {
        return;
    }

    // Create, or fetch an already-created, ULyraCameraMode by class
    ULyraCameraMode* CameraMode = GetCameraModeInstance(CameraModeClass);
    check(CameraMode);

    // If CameraModeStack[0] is already this mode, there's nothing to push
    int32 StackSize = CameraModeStack.Num();
    if ((StackSize > 0) && (CameraModeStack[0] == CameraMode))
    {
        // Already top of stack.
        return;
    }

    // See if it's already in the stack and remove it.
    // Figure out how much it was contributing to the stack.
    // If it's in the stack but not at [0], remove it and compute its contribution
    int32 ExistingStackIndex = INDEX_NONE;
    float ExistingStackContribution = 1.0f;

    for (int32 StackIndex = 0; StackIndex < StackSize; ++StackIndex)
    {
        if (CameraModeStack[StackIndex] == CameraMode)
        {
            ExistingStackIndex = StackIndex;
            ExistingStackContribution *= CameraMode->GetBlendWeight();
            break;
        }
        else
        {
            ExistingStackContribution *= (1.0f - CameraModeStack[StackIndex]->GetBlendWeight());
        }
    }

    // Found it: remove it; its contribution is already computed
    if (ExistingStackIndex != INDEX_NONE)
    {
        CameraModeStack.RemoveAt(ExistingStackIndex);
        StackSize--;
    }
    else
    {
        // Not found, so its contribution is 0
        ExistingStackContribution = 0.0f;
    }
}

If it isn't at position [0], it gets removed and its contribution computed; combined with the Insert at 0 later, the net effect is promoting it to the front.

What does this contribution mean? We haven't yet seen where blend weight actually takes effect, so what the weight means is still open; keep that question in mind and read on.

void ULyraCameraModeStack::PushCameraMode(TSubclassOf<ULyraCameraMode> CameraModeClass)
{
    // ...

    // Decide what initial weight to start with.
    const bool bShouldBlend = ((CameraMode->GetBlendTime() > 0.0f) && (StackSize > 0));
    const float BlendWeight = (bShouldBlend ? ExistingStackContribution : 1.0f);

    // Override the weight
    CameraMode->SetBlendWeight(BlendWeight);

    // Insert at position [0]
    CameraModeStack.Insert(CameraMode, 0);

    // Make sure the bottom of the stack always has 100% weight
    CameraModeStack.Last()->SetBlendWeight(1.0f);

    // If this push added a brand-new mode, notify it of activation
    if (ExistingStackIndex == INDEX_NONE)
    {
        CameraMode->OnActivation();
    }
}

Finally, note that this is called from ULyraCameraComponent.

EvaluateStack

bool ULyraCameraModeStack::EvaluateStack(float DeltaTime, FLyraCameraModeView& OutCameraModeView)
{
    if (!bIsActive)
    {
        return false;
    }

    UpdateStack(DeltaTime);
    BlendStack(OutCameraModeView);

    return true;
}

  • UpdateStack

void ULyraCameraModeStack::UpdateStack(float DeltaTime)
{
    const int32 StackSize = CameraModeStack.Num();
    if (StackSize <= 0)
    {
        return;
    }

    int32 RemoveCount = 0;
    int32 RemoveIndex = INDEX_NONE;

    // Iterate over the ULyraCameraModes
    for (int32 StackIndex = 0; StackIndex < StackSize; ++StackIndex)
    {
        ULyraCameraMode* CameraMode = CameraModeStack[StackIndex];
        check(CameraMode);

        // Update each mode's own view data (loc, rot, pitch, etc.)
        // and, crucially, advance its blend weight
        CameraMode->UpdateCameraMode(DeltaTime);

        if (CameraMode->GetBlendWeight() >= 1.0f)
        {
            // Everything below this mode is now irrelevant and can be removed.
            RemoveIndex = (StackIndex + 1);
            RemoveCount = (StackSize - RemoveIndex);
            break;
        }
    }

    // Everything after that point is irrelevant: remove it and deactivate
    if (RemoveCount > 0)
    {
        // Let the camera modes know they are being removed from the stack.
        for (int32 StackIndex = RemoveIndex; StackIndex < StackSize; ++StackIndex)
        {
            ULyraCameraMode* CameraMode = CameraModeStack[StackIndex];
            check(CameraMode);

            CameraMode->OnDeactivation();
        }

        CameraModeStack.RemoveAt(RemoveIndex, RemoveCount);
    }
}

void ULyraCameraModeStack::BlendStack(FLyraCameraModeView& OutCameraModeView) const
{
    const int32 StackSize = CameraModeStack.Num();
    if (StackSize <= 0)
    {
        return;
    }

    // Start at the bottom and blend up the stack

    // Take the last camera mode in the array; its view holds the camera's loc, rot, ...
    const ULyraCameraMode* CameraMode = CameraModeStack[StackSize - 1];
    check(CameraMode);
    OutCameraModeView = CameraMode->GetCameraModeView();

    // Start from the second-to-last and walk backwards
    for (int32 StackIndex = (StackSize - 2); StackIndex >= 0; --StackIndex)
    {
        CameraMode = CameraModeStack[StackIndex];
        check(CameraMode);

        // Blend it in
        OutCameraModeView.Blend(CameraMode->GetCameraModeView(), CameraMode->GetBlendWeight());
    }
}

ULyraCameraModeStack::GetBlendInfo fetches the bottom mode's data, wraps it in a struct and returns it.

That's the whole camera mode file.

So we know the camera has a "stack", though it doesn't look like a stack in the data-structure sense. It gets maintained and blended, but nothing is tied together yet.

One more class to read before coming back.

ULyraCameraComponent

    // Stack used to blend the camera modes.
    UPROPERTY()
    TObjectPtr<ULyraCameraModeStack> CameraModeStack;

    // Offset applied to the field of view.  The offset is only for one frame, it gets cleared once it is applied.
    float FieldOfViewOffset;

The data is just this stack,

plus FieldOfViewOffset; what that offset is for we don't know yet.

From the analysis above, GetBlendInfo is simply the bottom entry of the stack.

OnRegister news up the stack at registration time.

  • GetCameraView

ALyraPlayerCameraManager::UpdateViewTarget can set the camera's location and rotation through a struct.

Its parent class has

CamActor->GetCameraComponent()->GetCameraView(DeltaTime, OutVT.POV);

which calls this very interface on the camera component. So the core is GetCameraView returning coordinates that directly drive the camera position.

void ULyraCameraComponent::GetCameraView(float DeltaTime, FMinimalViewInfo& DesiredView)
{
    check(CameraModeStack);

    UpdateCameraModes();

    // ... the modes get updated first here
}

void ULyraCameraComponent::UpdateCameraModes()
{
    check(CameraModeStack);

    if (CameraModeStack->IsStackActivate())
    {
        if (DetermineCameraModeDelegate.IsBound())
        {
            if (const TSubclassOf<ULyraCameraMode> CameraMode = DetermineCameraModeDelegate.Execute())
            {
                CameraModeStack->PushCameraMode(CameraMode);
            }
        }
    }
}

It invokes the delegate to get a camera mode class and pushes it onto CameraModeStack.

The delegate is bound in the hero component, the one place that can coordinate both ability-driven camera modes and the normally configured one.

It then pushes the mode in; if the camera mode hasn't changed, the push has no effect, which ties back to the logic above.

void ULyraCameraComponent::GetCameraView(float DeltaTime, FMinimalViewInfo& DesiredView)
{
    check(CameraModeStack);

    UpdateCameraModes();

    // The goal is to obtain this FLyraCameraModeView, which holds the camera data
    FLyraCameraModeView CameraModeView;
    CameraModeStack->EvaluateStack(DeltaTime, CameraModeView);

    // Never mind how for now; look at what happens once it's done

    // Keep player controller in sync with the latest view.
    // Feed the rotation back to the controller
    if (APawn* TargetPawn = Cast<APawn>(GetTargetActor()))
    {
        if (APlayerController* PC = TargetPawn->GetController<APlayerController>())
        {
            PC->SetControlRotation(CameraModeView.ControlRotation);
        }
    }

    // Apply any offset that was added to the field of view.
    // Add any FOV offset on top; though nothing in the code seems to use it.
    CameraModeView.FieldOfView += FieldOfViewOffset;
    FieldOfViewOffset = 0.0f;

    // Keep camera component in sync with the latest view.
    // Keep the camera component itself in sync
    SetWorldLocationAndRotation(CameraModeView.Location, CameraModeView.Rotation);
    FieldOfView = CameraModeView.FieldOfView;

    // Fill in desired view.
    // Note: once DesiredView is filled in, it directly drives the camera
    DesiredView.Location = CameraModeView.Location;
    DesiredView.Rotation = CameraModeView.Rotation;
    DesiredView.FOV = CameraModeView.FieldOfView;
    DesiredView.OrthoWidth = OrthoWidth;
    DesiredView.OrthoNearClipPlane = OrthoNearClipPlane;
    DesiredView.OrthoFarClipPlane = OrthoFarClipPlane;
    DesiredView.AspectRatio = AspectRatio;
    DesiredView.bConstrainAspectRatio = bConstrainAspectRatio;
    DesiredView.bUseFieldOfViewForLOD = bUseFieldOfViewForLOD;
    DesiredView.ProjectionMode = ProjectionMode;

    // See if the CameraActor wants to override the PostProcess settings used.
    DesiredView.PostProcessBlendWeight = PostProcessBlendWeight;
    if (PostProcessBlendWeight > 0.0f)
    {
        DesiredView.PostProcessSettings = PostProcessSettings;
    }

    // XR handling omitted
}

So we know this data drives the camera's values.

Now let's walk it backwards:

    FLyraCameraModeView CameraModeView;
    CameraModeStack->EvaluateStack(DeltaTime, CameraModeView);

ULyraCameraModeStack::UpdateStack

void ULyraCameraModeStack::UpdateStack(float DeltaTime)
{
    const int32 StackSize = CameraModeStack.Num();
    if (StackSize <= 0)
    {
        return;
    }

    int32 RemoveCount = 0;
    int32 RemoveIndex = INDEX_NONE;

    // [mode1] [mode2] [mode3]: update each one; once a weight hits >= 1, everything after it is removed
    for (int32 StackIndex = 0; StackIndex < StackSize; ++StackIndex)
    {
        ULyraCameraMode* CameraMode = CameraModeStack[StackIndex];
        check(CameraMode);

        CameraMode->UpdateCameraMode(DeltaTime);

        if (CameraMode->GetBlendWeight() >= 1.0f)
        {
            // Everything below this mode is now irrelevant and can be removed.
            RemoveIndex = (StackIndex + 1);
            RemoveCount = (StackSize - RemoveIndex);
            break;
        }
    }

    if (RemoveCount > 0)
    {
        // Let the camera modes know they are being removed from the stack.
        for (int32 StackIndex = RemoveIndex; StackIndex < StackSize; ++StackIndex)
        {
            ULyraCameraMode* CameraMode = CameraModeStack[StackIndex];
            check(CameraMode);

            CameraMode->OnDeactivation();
        }

        CameraModeStack.RemoveAt(RemoveIndex, RemoveCount);
    }
}

void ULyraCameraModeStack::BlendStack(FLyraCameraModeView& OutCameraModeView) const
{
    const int32 StackSize = CameraModeStack.Num();
    if (StackSize <= 0)
    {
        return;
    }

    // Start at the bottom and blend up the stack
    // Take the last mode's view as the result, then iterate backwards from the second-to-last, blending each in
    const ULyraCameraMode* CameraMode = CameraModeStack[StackSize - 1];
    check(CameraMode);
    OutCameraModeView = CameraMode->GetCameraModeView();

    // For example, with the modes
    // [ThirdPerson mode] [Ability A mode] [Ability B mode]
    // we take Ability B's view directly, then blend backwards into the earlier ones.
    // Blending involves a weight; the rest is a lerp of loc, rot, control rot and FOV
    for (int32 StackIndex = (StackSize - 2); StackIndex >= 0; --StackIndex)
    {
        CameraMode = CameraModeStack[StackIndex];
        check(CameraMode);

        OutCameraModeView.Blend(CameraMode->GetCameraModeView(), CameraMode->GetBlendWeight());
    }
}

Understanding the update flow

The flow:

New modes get pushed in.

The stack is updated during GetCameraView.

Case 1:

Suppose the flow is third person, then an ADS (aim-down-sights) ability. The stack is a TArray; the number after each mode is its blend weight. The first mode enters with weight 1, meaning its blend is complete:

[ThirdPerson: 1]

Aim down sights: the new mode's contribution is 0 and it's inserted at index 0, so per the PushCameraMode code we get:

[ADS: 0][ThirdPerson: 1]

It then updates according to the curve and the blend time (UpdateStack -> UpdateCameraMode; everything below a mode whose weight reaches >= 1 gets removed):

[ADS: 0.2][ThirdPerson: 1]

BlendStack starts from ThirdPerson's view and blends the ADS view on top at weight 0.2. Once ADS reaches weight 1, ThirdPerson becomes irrelevant and is removed:

[ADS: 1]

Now if we misbehave and quickly toggle ADS back off while it's at, say, 0.4, ThirdPerson gets promoted to the front. Its contribution, (1 - 0.4) * 1 = 0.6, becomes its starting weight, and the bottom entry is forced back to weight 1:

[ThirdPerson: 0.6][ADS: 1]

We return the ADS view as the base and blend ThirdPerson in at 0.6; the composite is exactly what it was before the toggle, so nothing pops.

Misbehave again, re-entering ADS before its blend finishes, and ADS moves back to the front the same way:

[ADS: 0.4][ThirdPerson: 1]

The puzzling bit is how this weight and the lerp play out.

We return the last entry, then blend forward. With

[SomeUltimate: 0.1][ADS: 0.6][ThirdPerson: 1]

we return ThirdPerson's view as the base, but blended with ADS and then SomeUltimate:

    for (int32 StackIndex = (StackSize - 2); StackIndex >= 0; --StackIndex)
    {
        CameraMode = CameraModeStack[StackIndex];
        check(CameraMode);

        OutCameraModeView.Blend(CameraMode->GetCameraModeView(), CameraMode->GetBlendWeight());
    }

That clears up blending; let's look at the other files in the folder.

To summarize:

ALyraPlayerCameraManager::UpdateViewTarget drives the camera's values by modifying FTViewTarget& OutVT;

actually, the work happens in its parent, via

Super::UpdateViewTarget(OutVT, DeltaTime);

which contains the line

CamActor->GetCameraComponent()->GetCameraView(DeltaTime, OutVT.POV);

That modifies the camera through the camera component's GetCameraView. And since the UI camera is never actually used in the project, UpdateViewTarget here boils down to GetCameraView controlling the camera's properties.

The rest of ALyraPlayerCameraManager, the UICamera, is currently unused; it's just there to show where you could extend.

Then ULyraCameraComponent holds the stack as a property: TObjectPtr<ULyraCameraModeStack> CameraModeStack;

But it isn't a stack in the data-structure sense: it can pluck an element from the middle and move it to the front. Think of it as a TArray.

Setting aside the init, teardown, get/set and debug interfaces, the core flow is GetCameraView.

It will:

CameraModeStack->PushCameraMode(CameraMode); check whether the current state has a new mode to add

CameraModeStack->EvaluateStack(DeltaTime, CameraModeView);

  • update every mode, accumulating weights and camera properties, and remove the fully blended modes below any weight >= 1
  • take the last mode, blend each still-transitioning mode in from the second-to-last forward, and return the result

then apply it to the camera's properties.

ILyraCameraAssistInterface

class ILyraCameraAssistInterface
{
    GENERATED_BODY()

public:
    /**
     * Gets the list of actors the camera is allowed to penetrate; very useful for third-person cameras.
     * For instance, a third-person follow camera should ignore the view target, the player Pawn, vehicles and so on; adding them to this list keeps the camera from being blocked by those self-related objects.
     */
    virtual void GetIgnoredActorsForCameraPentration(TArray<const AActor*>& OutActorsAllowPenetration) const { }

    /**
     * The target actor the camera should keep from penetrating.
     * Normally this is almost always the camera's view target (ViewTarget), and that default is used if this interface isn't implemented.
     * In some scenarios, though, the view target and the root actor that must stay on screen are not the same object; this hook lets you customize the target.
     */
    virtual TOptional<AActor*> GetCameraPreventPenetrationTarget() const
    {
        return TOptional<AActor*>();
    }

    /**
     * Callback fired when the camera penetrates the focused target.
     * If you need to hide the target actor while the camera overlaps it (e.g. hiding the character mesh when the camera clips into it), implement that logic here.
     */
    virtual void OnCameraPenetratingTarget() { }
};

ULyraCameraMode_ThirdPerson

The important part is that it overrides this function; the base version for reference:

void ULyraCameraMode::UpdateView(float DeltaTime)
{
    FVector PivotLocation = GetPivotLocation();
    FRotator PivotRotation = GetPivotRotation();

    PivotRotation.Pitch = FMath::ClampAngle(PivotRotation.Pitch, ViewPitchMin, ViewPitchMax);

    View.Location = PivotLocation;
    View.Rotation = PivotRotation;
    View.ControlRotation = View.Rotation;
    View.FieldOfView = FieldOfView;
}

First, a pass over the properties.

/**
 * ULyraCameraMode_ThirdPerson
 *
 *  A basic third person camera mode.
 */
UCLASS(Abstract, Blueprintable)
class ULyraCameraMode_ThirdPerson : public ULyraCameraMode
{
    GENERATED_BODY()

    // ...
protected:

    // A curve for the offset; we'll see the details in the code
    UPROPERTY(EditDefaultsOnly, Category = "Third Person", Meta = (EditCondition = "!bUseRuntimeFloatCurves"))
    TObjectPtr<const UCurveVector> TargetOffsetCurve;

    // UE-103986: Live editing of RuntimeFloatCurves during PIE does not work (unlike curve assets).
    // Once that is resolved this will become the default and TargetOffsetCurve will be removed.
    // Apparently PIE edits don't update in real time; these variables exist because of that bug and are meant to be removed later
    UPROPERTY(EditDefaultsOnly, Category = "Third Person")
    bool bUseRuntimeFloatCurves;

    UPROPERTY(EditDefaultsOnly, Category = "Third Person", Meta = (EditCondition = "bUseRuntimeFloatCurves"))
    FRuntimeFloatCurve TargetOffsetX;

    UPROPERTY(EditDefaultsOnly, Category = "Third Person", Meta = (EditCondition = "bUseRuntimeFloatCurves"))
    FRuntimeFloatCurve TargetOffsetY;

    UPROPERTY(EditDefaultsOnly, Category = "Third Person", Meta = (EditCondition = "bUseRuntimeFloatCurves"))
    FRuntimeFloatCurve TargetOffsetZ;

    // How quickly the crouch offset blends in or out
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Third Person")
    float CrouchOffsetBlendMultiplier = 5.0f;

    // Penetration prevention: my guess is this is the scheme for when something sits between the camera and the character
public:
    // Blend-in / blend-out times
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category="Collision")
    float PenetrationBlendInTime = 0.1f;

    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category="Collision")
    float PenetrationBlendOutTime = 0.15f;

    /** If true, does collision checks to prevent the camera from going inside the world */
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category="Collision")
    bool bPreventPenetration = true;

    /** If true, tries to detect nearby walls and move the camera in anticipation. Helps prevent sudden pops */
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category="Collision")
    bool bDoPredictiveAvoidance = true;

    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Collision")
    float CollisionPushOutDistance = 2.f;

    /** Fires when penetration pushes the camera to within this percent of its full distance */
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Collision")
    float ReportPenetrationPercent = 0.f;

    /**
     * These are the feeler rays that are used to find where to place the camera.
     * Index: 0  : This is the normal feeler we use to prevent collisions.
     * Index: 1+ : These feelers are used if you bDoPredictiveAvoidance=true, to scan for potential impacts if the player
     *             were to rotate towards that direction and primitively collide the camera so that it pulls in before
     *             impacting the occluder.
     */
    UPROPERTY(EditDefaultsOnly, Category = "Collision")
    TArray<FLyraPenetrationAvoidanceFeeler> PenetrationAvoidanceFeelers;

    UPROPERTY(Transient)
    float AimLineToDesiredPosBlockedPct;

    UPROPERTY(Transient)
    TArray<TObjectPtr<const AActor>> DebugActorsHitDuringCameraPenetration;

#if ENABLE_DRAW_DEBUG
    mutable float LastDrawDebugTime = -MAX_FLT;
#endif

protected:
    // Crouch-related flow
    void SetTargetCrouchOffset(FVector NewTargetOffset);
    void UpdateCrouchOffset(float DeltaTime);

    FVector InitialCrouchOffset = FVector::ZeroVector;
    FVector TargetCrouchOffset = FVector::ZeroVector;
    float CrouchOffsetBlendPct = 1.0f;
    FVector CurrentCrouchOffset = FVector::ZeroVector;

};

Crouch transitions, plus collision checks to avoid occlusion.

FLyraPenetrationAvoidanceFeeler

/**
 * Defines a feeler ray used for camera penetration avoidance
 */
USTRUCT()
struct FLyraPenetrationAvoidanceFeeler
{
    GENERATED_BODY()

    /**
    Rotation offset relative to the main ray. Accurate detection needs several rays,
    hence the notion of a main ray plus these angular offsets for the extra ones.
    */
    UPROPERTY(EditAnywhere, Category=PenetrationAvoidanceFeeler)
    FRotator AdjustmentRot;

    /**
    Weight this feeler's hit against world geometry carries in the camera's final position.
    With several rays hitting the same object at once, a single hit isn't treated as a full block; the weights accumulate instead.
    */
    UPROPERTY(EditAnywhere, Category=PenetrationAvoidanceFeeler)
    float WorldWeight;

    /**
    Weight this feeler's hit against an APawn carries (set to 0 to skip pawn collisions entirely)
    */
    UPROPERTY(EditAnywhere, Category=PenetrationAvoidanceFeeler)
    float PawnWeight;

    /** The extent/radius this feeler uses for its collision trace */
    UPROPERTY(EditAnywhere, Category=PenetrationAvoidanceFeeler)
    float Extent;

    /**
    Minimum number of frames between traces for this feeler if nothing was hit last frame.
    There's no need to trace every frame, so this throttles it.
    */
    UPROPERTY(EditAnywhere, Category=PenetrationAvoidanceFeeler)
    int32 TraceInterval;

    /** Frames remaining until this feeler traces again */
    UPROPERTY(transient)
    int32 FramesUntilNextTrace;
};

ULyraCameraMode_ThirdPerson::UpdateView

void ULyraCameraMode_ThirdPerson::UpdateForTarget(float DeltaTime)
{

    if (const ACharacter* TargetCharacter = Cast<ACharacter>(GetTargetActor()))
    {
        if (TargetCharacter->IsCrouched())
        {
            const ACharacter* TargetCharacterCDO = TargetCharacter->GetClass()->GetDefaultObject<ACharacter>();
            const float CrouchedHeightAdjustment = TargetCharacterCDO->CrouchedEyeHeight - TargetCharacterCDO->BaseEyeHeight;

            SetTargetCrouchOffset(FVector(0.f, 0.f, CrouchedHeightAdjustment));

            return;
        }
    }

    SetTargetCrouchOffset(FVector::ZeroVector);
}

This checks the crouch state and sets the offset to blend toward.

CrouchedHeightAdjustment (the difference between the crouched and standing eye heights) is negative.

The blend is re-initialized, transitioning from the current value toward that target:

void ULyraCameraMode_ThirdPerson::SetTargetCrouchOffset(FVector NewTargetOffset)
{
    CrouchOffsetBlendPct = 0.0f; // blend percentage restarts
    InitialCrouchOffset = CurrentCrouchOffset;
    TargetCrouchOffset = NewTargetOffset;
}

void ULyraCameraMode_ThirdPerson::UpdateCrouchOffset(float DeltaTime)
{
    if (CrouchOffsetBlendPct < 1.0f)
    {
        // Advance the percentage and interpolate the offset
        CrouchOffsetBlendPct = FMath::Min(CrouchOffsetBlendPct + DeltaTime * CrouchOffsetBlendMultiplier, 1.0f);
        CurrentCrouchOffset = FMath::InterpEaseInOut(InitialCrouchOffset, TargetCrouchOffset, CrouchOffsetBlendPct, 1.0f);
    }
    else
    {
        CurrentCrouchOffset = TargetCrouchOffset;
        CrouchOffsetBlendPct = 1.0f;
    }
}
void ULyraCameraMode_ThirdPerson::UpdateView(float DeltaTime)
{
    UpdateForTarget(DeltaTime);
    UpdateCrouchOffset(DeltaTime);

    FVector PivotLocation = GetPivotLocation() + CurrentCrouchOffset;
    FRotator PivotRotation = GetPivotRotation();

    PivotRotation.Pitch = FMath::ClampAngle(PivotRotation.Pitch, ViewPitchMin, ViewPitchMax);

    View.Location = PivotLocation;
    View.Rotation = PivotRotation;
    View.ControlRotation = View.Rotation;
    View.FieldOfView = FieldOfView;

    // Apply third person offset using pitch.
    // Unlike the base class, which uses the pivot values directly, this adds a pitch-driven offset curve.
    if (!bUseRuntimeFloatCurves)
    {
        if (TargetOffsetCurve)
        {
            const FVector TargetOffset = TargetOffsetCurve->GetVectorValue(PivotRotation.Pitch);
            View.Location = PivotLocation + PivotRotation.RotateVector(TargetOffset);
        }
    }
    else
    {
        FVector TargetOffset(0.0f);

        TargetOffset.X = TargetOffsetX.GetRichCurveConst()->Eval(PivotRotation.Pitch);
        TargetOffset.Y = TargetOffsetY.GetRichCurveConst()->Eval(PivotRotation.Pitch);
        TargetOffset.Z = TargetOffsetZ.GetRichCurveConst()->Eval(PivotRotation.Pitch);

        View.Location = PivotLocation + PivotRotation.RotateVector(TargetOffset);
    }

    // Adjust final desired camera location to prevent any penetration
    // Finally, handle any collision between the camera and the character.
    UpdatePreventPenetration(DeltaTime);
}

Some concepts:

SafeLocation: the camera's fallback position, usually the character's location; falling all the way back means hugging the character.

Desired camera position: the position the camera math produces; because of blockers it may need a second adjustment.

Actual position: the position after resolving blockers.

[Safe]  [Actual position / wall]  [Desired]

---------------------A

----------------------------------------B

Along one line: the camera sits at the desired position looking at the character; a wall is in the way, so the camera moves to an actual position just in front of the wall.

Blocked percentage: (actual position - safe position) / (desired position - safe position), in [0, 1].

A/B = 1 means nothing is blocking.

A/B = 0.4 means the blocker sits 40% of the way out from the safe location, 60% short of the desired position.

When not blocked, the camera should move toward the desired position until it is blocked or arrives there.

When blocked, the camera should retreat toward the safe location until it is no longer blocked or reaches it.

Blocking is detected with ray traces; the traces should ignore scene objects that do not affect the camera's line of sight, and blockers can also be weighted per object.

ULyraCameraMode_ThirdPerson::ULyraCameraMode_ThirdPerson()
{
    TargetOffsetCurve = nullptr;

    PenetrationAvoidanceFeelers.Add(FLyraPenetrationAvoidanceFeeler(FRotator(+00.0f, +00.0f, 0.0f), 1.00f, 1.00f, 14.f, 0));
    PenetrationAvoidanceFeelers.Add(FLyraPenetrationAvoidanceFeeler(FRotator(+00.0f, +16.0f, 0.0f), 0.75f, 0.75f, 00.f, 3));
    PenetrationAvoidanceFeelers.Add(FLyraPenetrationAvoidanceFeeler(FRotator(+00.0f, -16.0f, 0.0f), 0.75f, 0.75f, 00.f, 3));
    PenetrationAvoidanceFeelers.Add(FLyraPenetrationAvoidanceFeeler(FRotator(+00.0f, +32.0f, 0.0f), 0.50f, 0.50f, 00.f, 5));
    PenetrationAvoidanceFeelers.Add(FLyraPenetrationAvoidanceFeeler(FRotator(+00.0f, -32.0f, 0.0f), 0.50f, 0.50f, 00.f, 5));
    PenetrationAvoidanceFeelers.Add(FLyraPenetrationAvoidanceFeeler(FRotator(+20.0f, +00.0f, 0.0f), 1.00f, 1.00f, 00.f, 4));
    PenetrationAvoidanceFeelers.Add(FLyraPenetrationAvoidanceFeeler(FRotator(-20.0f, +00.0f, 0.0f), 0.50f, 0.50f, 00.f, 4));
}

The constructor initializes a fan of penetration-avoidance feeler rays: one central feeler traced every frame, plus off-axis feelers at various yaw and pitch angles with lower weights and longer trace intervals.

void ULyraCameraMode_ThirdPerson::UpdatePreventPenetration(float DeltaTime)
{
    if (!bPreventPenetration)
    {
        return;
    }

    // the character
    AActor* TargetActor = GetTargetActor();
    APawn* TargetPawn = Cast<APawn>(TargetActor);

    // its controller
    AController* TargetController = TargetPawn ? TargetPawn->GetController() : nullptr;

    // the controller's camera-assist interface, used a bit further down
    ILyraCameraAssistInterface* TargetControllerAssist = Cast<ILyraCameraAssistInterface>(TargetController);

    // ask the actor's interface for the object the camera must not penetrate; if none is given, use the actor itself
    ILyraCameraAssistInterface* TargetActorAssist = Cast<ILyraCameraAssistInterface>(TargetActor);
    TOptional<AActor*> OptionalPPTarget = TargetActorAssist ? TargetActorAssist->GetCameraPreventPenetrationTarget() : TOptional<AActor*>();

    // there must be at least one object the camera may not penetrate, normally the player character
    AActor* PPActor = OptionalPPTarget.IsSet() ? OptionalPPTarget.GetValue() : TargetActor;
    ILyraCameraAssistInterface* PPActorAssist = OptionalPPTarget.IsSet() ? Cast<ILyraCameraAssistInterface>(PPActor) : nullptr;

    // ...
}

[Safe location / character]  [Actual location / wall]  [Desired location / camera]


void ULyraCameraMode_ThirdPerson::UpdatePreventPenetration(float DeltaTime)
{
    if (!bPreventPenetration)
    {
        return;
    }

    // ...

    const UPrimitiveComponent* PPActorRootComponent = Cast<UPrimitiveComponent>(PPActor->GetRootComponent());
    if (PPActorRootComponent)
    {
        // First, choose a safe location so the camera doesn't move too much while aiming.
        // Our camera is our reticle, so we want to keep it aimed and as steady and smooth as possible.
        // Pick the point on the capsule closest to our aim line.
        FVector ClosestPointOnLineToCapsuleCenter;
        FVector SafeLocation = PPActor->GetActorLocation();
        // point-to-line distance: finds the point on the view line closest to the capsule center (the line is defined by the camera location and view direction)
        FMath::PointDistToLine(SafeLocation, View.Rotation.Vector(), View.Location, ClosestPointOnLineToCapsuleCenter);

        // Clamp SafeLocation's height toward the aim line, while keeping it inside the capsule.
        // This keeps the camera from going above the head or below the feet; note it is overwritten again just below.
        float const PushInDistance = PenetrationAvoidanceFeelers[0].Extent + CollisionPushOutDistance;
        float const MaxHalfHeight = PPActor->GetSimpleCollisionHalfHeight() - PushInDistance;
        SafeLocation.Z = FMath::Clamp(ClosestPointOnLineToCapsuleCenter.Z, SafeLocation.Z - MaxHalfHeight, SafeLocation.Z + MaxHalfHeight);

        // Isn't SafeLocation overwritten here again?
        // GetSquaredDistanceToCollision returns the squared distance to the nearest body-instance surface,
        // and SafeLocation is an out parameter here, so it gets updated too.
        float DistanceSqr;
        PPActorRootComponent->GetSquaredDistanceToCollision(ClosestPointOnLineToCapsuleCenter, DistanceSqr, SafeLocation);

        // Push back inside capsule to avoid initial penetration when doing line checks.
        // push it a little way back inside the capsule; starting a trace right on the edge can cause visibility problems
        if (PenetrationAvoidanceFeelers.Num() > 0)
        {
            SafeLocation += (SafeLocation - ClosestPointOnLineToCapsuleCenter).GetSafeNormal() * PushInDistance;
        }

        // Then aim line to desired camera position
        // trace from the aim line to the desired camera position
        // this performs the wall check in front of the camera, adjusting the values directly
        bool const bSingleRayPenetrationCheck = !bDoPredictiveAvoidance; // a single-ray check, or several rays?
        PreventCameraPenetration(*PPActor, SafeLocation, View.Location, DeltaTime, AimLineToDesiredPosBlockedPct, bSingleRayPenetrationCheck);

        // After the check, if the camera has entered the character, call OnCameraPenetratingTarget on every actor implementing the interface.
        // That sets bHideViewTargetPawnNextFrame = true, so the actor is hidden on the next frame.
        // When is it shown again? See ALyraPlayerController::UpdateHiddenComponents.
        ILyraCameraAssistInterface* AssistArray[] = { TargetControllerAssist, TargetActorAssist, PPActorAssist };

        if (AimLineToDesiredPosBlockedPct < ReportPenetrationPercent)
        {
            for (ILyraCameraAssistInterface* Assist : AssistArray)
            {
                if (Assist)
                {
                    // camera is too close, tell the assists
                    Assist->OnCameraPenetratingTarget();
                }
            }
        }
    }
}

Two open questions remain at this point:

PreventCameraPenetration itself, and the logic that restores the hidden pawn. Let's look at each in turn.

void ULyraCameraMode_ThirdPerson::PreventCameraPenetration(class AActor const& ViewTarget, FVector const& SafeLoc, FVector& CameraLoc, float const& DeltaTime, float& DistBlockedPct, bool bSingleRayOnly)
{
    // Parameters first. ViewTarget: the player character actor.
    // SafeLoc: the fallback safe position. Diagram: [safe location / character]   [actual location / wall]   <- [desired location / camera]
    // CameraLoc: modifying this feeds back into the camera-stack calculation.
    // DistBlockedPct is AimLineToDesiredPosBlockedPct; there is no config for it, the default is 0, and it is marked Transient.
    // bSingleRayOnly: a single-ray check, or several rays?
#if ENABLE_DRAW_DEBUG
    DebugActorsHitDuringCameraPenetration.Reset();
#endif

    float HardBlockedPct = DistBlockedPct;
    float SoftBlockedPct = DistBlockedPct;

    FVector BaseRay = CameraLoc - SafeLoc;
    FRotationMatrix BaseRayMatrix(BaseRay.Rotation());
    FVector BaseRayLocalUp, BaseRayLocalFwd, BaseRayLocalRight;

    BaseRayMatrix.GetScaledAxes(BaseRayLocalFwd, BaseRayLocalRight, BaseRayLocalUp);

    float DistBlockedPctThisFrame = 1.f;

    int32 const NumRaysToShoot = bSingleRayOnly ? FMath::Min(1, PenetrationAvoidanceFeelers.Num()) : PenetrationAvoidanceFeelers.Num();
    FCollisionQueryParams SphereParams(SCENE_QUERY_STAT(CameraPen), false, nullptr/*PlayerCamera*/);

    SphereParams.AddIgnoredActor(&ViewTarget);

    // ...
}

...

To be continued next time.
