I. The Camera
1. Camera/View Space
To define a camera, we need its position in world space, the direction it is looking in, a vector pointing to its right, and a vector pointing upwards from it.
Camera position
Getting the camera position is easy. The camera position is simply a vector in world space that points at the camera's location. We set the camera at the same position as in the previous tutorial:
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
Camera direction
The next vector we need is the camera's direction. Subtracting the camera position from the scene origin would give a vector pointing from the camera towards the target, but since we know the camera looks down the negative z-axis, we want the direction vector to point along the camera's positive z-axis. By reversing the order of the subtraction, we get a vector pointing from the target towards the camera instead:
glm::vec3 cameraTarget = glm::vec3(0.0f, 0.0f, 0.0f);
glm::vec3 cameraDirection = glm::normalize(cameraPos - cameraTarget);
Right axis
The next vector we need is a right vector (Right Vector) that represents the positive x-axis of camera space. To get it we first use a little trick: define an up vector (Up Vector) in world space, then take its cross product with the camera direction vector from the previous step. The cross product of two vectors is perpendicular to both, so we get a vector pointing along the positive x-axis (swapping the two operands would give the opposite vector, pointing along the negative x-axis):
glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 cameraRight = glm::normalize(glm::cross(up, cameraDirection));
Up axis
Now that we have both the x-axis and the z-axis vectors, retrieving the camera's positive y-axis is relatively easy: we take the cross product of the direction vector (Direction Vector) and the right vector:
glm::vec3 cameraUp = glm::cross(cameraDirection, cameraRight);
Using these camera vectors we can construct a LookAt matrix, which is very useful when building a camera.
2. LookAt
Now that we have three mutually perpendicular axes and a position defining camera space, we can construct our own LookAt matrix:
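In its standard form, the matrix multiplies a rotation (the camera's axes as rows) by a translation:

```latex
$$
\mathrm{LookAt} =
\begin{bmatrix}
R_x & R_y & R_z & 0\\
U_x & U_y & U_z & 0\\
D_x & D_y & D_z & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & -P_x\\
0 & 1 & 0 & -P_y\\
0 & 0 & 1 & -P_z\\
0 & 0 & 0 & 1
\end{bmatrix}
$$
```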
Here R is the right vector, U is the up vector, D is the direction vector and P is the camera's position vector. Note that the position vector is negated, because we ultimately want to translate the world in the direction opposite to our own movement.
GLM already provides all this. All we have to do is specify a camera position, a target position and a world-space vector representing the up vector (the one we used to compute the right vector). GLM then builds the LookAt matrix for us, and we can use it as our view matrix:
glm::mat4 view;
view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),
glm::vec3(0.0f, 0.0f, 0.0f),
glm::vec3(0.0f, 1.0f, 0.0f));
3. Free movement
First we have to set up a camera system; it is useful to define a few camera variables near the top of the program:
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
The LookAt call now becomes:
view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
We first set the camera position to the previously defined cameraPos. The target is the current position plus the direction vector we just defined. This ensures that however we move, the camera keeps looking in the target direction. We update the cameraPos vector whenever certain keys are pressed.
We already defined a key_callback function for GLFW's keyboard input; let's add a few new key commands:
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
...
GLfloat cameraSpeed = 0.05f;
if (key == GLFW_KEY_W)
cameraPos += cameraSpeed * cameraFront;
if (key == GLFW_KEY_S)
cameraPos -= cameraSpeed * cameraFront;
if (key == GLFW_KEY_A)
cameraPos -= glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
if (key == GLFW_KEY_D)
cameraPos += glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
}
Whenever we press one of the WASD keys, the camera's position is updated accordingly. To move forwards or backwards we add or subtract the direction vector from the position vector. To move sideways we take a cross product to create a right vector and move along it. This creates the familiar strafe and forward/backward camera effect.
4. Looking around
Euler angles
Euler angles (Euler Angle) are three values that can represent any rotation in 3D space, introduced by Leonhard Euler in the 18th century. There are three Euler angles: pitch (Pitch), yaw (Yaw) and roll (Roll); the following images illustrate them:
The pitch is the angle describing how much we look up or down, shown in the first image. The second image shows the yaw, which represents how much we look left or right. The roll represents how much we roll the camera.
For our camera system we only care about pitch and yaw, so we won't discuss roll. Given a pitch and a yaw value, we can convert them into a 3D vector representing the new direction vector:
direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw)); // translator's note: direction represents the camera's "front" axis, which points opposite to the direction of the second camera in the first image of this article
direction.y = sin(glm::radians(pitch));
direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
5. Mouse input
First we have to tell GLFW to hide the cursor and capture (Capture) it. Capturing the mouse means the cursor stays within the window while the application has focus (unless the application loses focus or quits). We can do this with a single call:
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
After this call, however we move the mouse it won't be visible and it won't leave the window. This is perfect for an FPS camera system.
To calculate the pitch and yaw values we need to tell GLFW to listen to mouse-movement events. We do this by creating a callback function with the following prototype (much like keyboard input):
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
Here xpos and ypos represent the current mouse position. Once we register the callback with GLFW, the mouse_callback function is called every time the mouse moves:
glfwSetCursorPosCallback(window, mouse_callback);
When handling mouse input for an FPS-style camera, there are several steps we have to take before we can compute the final direction vector:
- Calculate the mouse offset since the last frame.
- Add the offset values to the camera's yaw and pitch.
- Constrain the yaw and pitch to their minimum and maximum values.
- Calculate the direction vector.
We first have to store the last mouse position, which we initialize to the center of the screen (the screen size is 800 by 600):
GLfloat lastX = 400, lastY = 300;
Then, in the callback, we calculate the direction vector:
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
// on the first mouse event, set the current position as the last position to avoid a sudden jump
if (firstMouse)
{
lastX = xpos;
lastY = ypos;
firstMouse = false;
}
// calculate the offsets and store the current coordinates as the last frame's coordinates
GLfloat xoffset = xpos - lastX;
GLfloat yoffset = lastY - ypos;
lastX = xpos;
lastY = ypos;
// scale the offsets
GLfloat sensitivity = 0.05f;
xoffset *= sensitivity;
yoffset *= sensitivity;
// apply the offsets to the angles
yaw += xoffset;
pitch += yoffset;
// constrain the pitch
if (pitch > 89.0f)
pitch = 89.0f;
if (pitch < -89.0f)
pitch = -89.0f;
// calculate the actual direction vector
glm::vec3 front;
front.x = cos(glm::radians(yaw)) * cos(glm::radians(pitch));
front.y = sin(glm::radians(pitch));
front.z = sin(glm::radians(yaw)) * cos(glm::radians(pitch));
cameraFront = glm::normalize(front);
}
6. Zoom
When the field of view becomes smaller, the visible area shrinks, giving the illusion of zooming in. We will use the mouse scroll wheel to zoom. Just like mouse movement and keyboard input, we need a callback for the scroll wheel:
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
if (aspect >= 1.0f && aspect <= 45.0f)
aspect -= yoffset;
if (aspect <= 1.0f)
aspect = 1.0f;
if (aspect >= 45.0f)
aspect = 45.0f;
}
The yoffset value tells us the amount we scrolled. When scroll_callback is called, we change the content of the global aspect variable. Since 45.0f is the default fov, we constrain the zoom level between 1.0f and 45.0f.
We now have to upload the perspective projection matrix to the GPU each frame, but this time using the aspect variable as its fov:
projection = glm::perspective(aspect, (GLfloat)WIDTH / (GLfloat)HEIGHT, 0.1f, 100.0f);
Lastly, don't forget to register the scroll callback:
glfwSetScrollCallback(window, scroll_callback);
7. A camera class
Here is the camera class from the original tutorial, reproduced directly:
#pragma once
// Std. Includes
#include <vector>
// GL Includes
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Defines several possible options for camera movement. Used as abstraction to stay away from window-system specific input methods
enum Camera_Movement {
FORWARD,
BACKWARD,
LEFT,
RIGHT
};
// Default camera values
const GLfloat YAW = -90.0f;
const GLfloat PITCH = 0.0f;
const GLfloat SPEED = 3.0f;
const GLfloat SENSITIVTY = 0.25f;
const GLfloat ZOOM = 45.0f;
// An abstract camera class that processes input and calculates the corresponding Euler Angles, Vectors and Matrices for use in OpenGL
class Camera
{
public:
// Camera Attributes
glm::vec3 Position;
glm::vec3 Front;
glm::vec3 Up;
glm::vec3 Right;
glm::vec3 WorldUp;
// Euler Angles
GLfloat Yaw;
GLfloat Pitch;
// Camera options
GLfloat MovementSpeed;
GLfloat MouseSensitivity;
GLfloat Zoom;
// Constructor with vectors
Camera(glm::vec3 position = glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f), GLfloat yaw = YAW, GLfloat pitch = PITCH) : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVTY), Zoom(ZOOM)
{
this->Position = position;
this->WorldUp = up;
this->Yaw = yaw;
this->Pitch = pitch;
this->updateCameraVectors();
}
// Constructor with scalar values
Camera(GLfloat posX, GLfloat posY, GLfloat posZ, GLfloat upX, GLfloat upY, GLfloat upZ, GLfloat yaw, GLfloat pitch) : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVTY), Zoom(ZOOM)
{
this->Position = glm::vec3(posX, posY, posZ);
this->WorldUp = glm::vec3(upX, upY, upZ);
this->Yaw = yaw;
this->Pitch = pitch;
this->updateCameraVectors();
}
// Returns the view matrix calculated using Euler Angles and the LookAt Matrix
glm::mat4 GetViewMatrix()
{
return glm::lookAt(this->Position, this->Position + this->Front, this->Up);
}
// Processes input received from any keyboard-like input system. Accepts input parameter in the form of camera defined ENUM (to abstract it from windowing systems)
void ProcessKeyboard(Camera_Movement direction, GLfloat deltaTime)
{
GLfloat velocity = this->MovementSpeed * deltaTime;
if (direction == FORWARD)
this->Position += this->Front * velocity;
if (direction == BACKWARD)
this->Position -= this->Front * velocity;
if (direction == LEFT)
this->Position -= this->Right * velocity;
if (direction == RIGHT)
this->Position += this->Right * velocity;
}
// Processes input received from a mouse input system. Expects the offset value in both the x and y direction.
void ProcessMouseMovement(GLfloat xoffset, GLfloat yoffset, GLboolean constrainPitch = true)
{
xoffset *= this->MouseSensitivity;
yoffset *= this->MouseSensitivity;
this->Yaw += xoffset;
this->Pitch += yoffset;
// Make sure that when pitch is out of bounds, screen doesn't get flipped
if (constrainPitch)
{
if (this->Pitch > 89.0f)
this->Pitch = 89.0f;
if (this->Pitch < -89.0f)
this->Pitch = -89.0f;
}
// Update Front, Right and Up Vectors using the updated Euler angles
this->updateCameraVectors();
}
// Processes input received from a mouse scroll-wheel event. Only requires input on the vertical wheel-axis
void ProcessMouseScroll(GLfloat yoffset)
{
if (this->Zoom >= 1.0f && this->Zoom <= 45.0f)
this->Zoom -= yoffset;
if (this->Zoom <= 1.0f)
this->Zoom = 1.0f;
if (this->Zoom >= 45.0f)
this->Zoom = 45.0f;
}
private:
// Calculates the front vector from the Camera's (updated) Euler Angles
void updateCameraVectors()
{
// Calculate the new Front vector
glm::vec3 front;
front.x = cos(glm::radians(this->Yaw)) * cos(glm::radians(this->Pitch));
front.y = sin(glm::radians(this->Pitch));
front.z = sin(glm::radians(this->Yaw)) * cos(glm::radians(this->Pitch));
this->Front = glm::normalize(front);
// Also re-calculate the Right and Up vector
this->Right = glm::normalize(glm::cross(this->Front, this->WorldUp)); // Normalize the vectors, because their length gets closer to 0 the more you look up or down which results in slower movement.
this->Up = glm::normalize(glm::cross(this->Right, this->Front));
}
};
Please credit the source when reposting: http://blog.csdn.net/ylbs110/article/details/52506033
II. Example
Code:
#include <iostream>
using namespace std;
// GLEW
#define GLEW_STATIC
#include <GL/glew.h>
// GLFW
#include <GLFW/glfw3.h>
// SOIL
#include <SOIL\SOIL.h>
#include <glm\glm.hpp>
#include <glm\gtc\matrix_transform.hpp>
#include <glm\gtc\type_ptr.hpp>
#include "Shader.h"
#include "Camera.h"
const GLuint WIDTH = 800, HEIGHT = 600;
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode);
GLuint loadTexture(string fileName, GLint REPEAT, GLint FILTER);
void do_movement();
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset);
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
// Shaders
const GLchar* vertexShaderSource = "#version 330 core\n"
"layout (location = 0) in vec3 position;\n"//頂點資料傳入的坐标
"layout (location = 1) in vec3 color;\n"//頂點資料傳入的顔色
"layout (location = 2) in vec2 texCoord;\n"//頂點資料傳入的顔色
"uniform vec4 offset;\n"
"uniform float mixPar;\n"
"uniform mat4 model;\n"
"uniform mat4 view;\n"
"uniform mat4 projection;\n"
"out vec3 Color;\n"
"out vec2 TexCoord;\n"
"out vec4 vertexColor;\n"//将頂點坐标作為顔色傳入片段着色器,測試所得效果
"out float MixPar;\n"
"void main()\n"
"{\n"
"gl_Position =projection * view * model* vec4(position.x, position.y, position.z, 1.0)+offset;\n"
"vertexColor=gl_Position*0.2f;\n"
"Color=color*0.2f;\n"
"TexCoord=texCoord;\n"
"MixPar=mixPar;\n"
"}\0";
const GLchar* fragmentShaderSource = "#version 330 core\n"
"out vec4 color;\n"
"in vec4 vertexColor;\n"
"in vec3 Color;\n"
"in vec2 TexCoord;\n"
"in float MixPar;\n"
"uniform sampler2D ourTexture1;\n"
"uniform sampler2D ourTexture2;\n"
"void main()\n"
"{\n"
"color =mix(texture(ourTexture1, TexCoord),texture(ourTexture2, vec2(TexCoord.x,1-TexCoord.y)),MixPar)+vec4(Color, 1.0f)+vertexColor;\n"//合成兩張紋理并對第二張紋理進行翻轉操作,混合比例由上下鍵控制
"}\n\0";
Camera mainCamera;
Shader shader; // shader program wrapper
GLuint texContainer, texAwesomeface; // texture ids
float key_UD = 0.2f; // texture mix ratio (initial value assumed; lost in transcription)
GLuint VBO, VAO;
GLfloat deltaTime = 0.0f; // time between the current frame and the last frame
GLfloat lastFrame = 0.0f; // time of the last frame
bool keys[1024];
GLfloat lastX = 400, lastY = 300;
GLfloat scrollSpeed = 1.0f; // zoom scale factor (value assumed; lost in transcription)
bool firstMouse = true;
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
void shaderInit() {
shader = Shader(vertexShaderSource, fragmentShaderSource);
}
void textureInit() {
texContainer = loadTexture("container.jpg", GL_CLAMP_TO_EDGE, GL_LINEAR);
texAwesomeface = loadTexture("awesomeface.png", GL_MIRRORED_REPEAT, GL_NEAREST);
}
GLuint loadTexture(string fileName, GLint REPEAT, GLint FILTER) {
// create the texture
GLuint texture;
glGenTextures(1, &texture);
// bind the texture
glBindTexture(GL_TEXTURE_2D, texture);
// set the wrapping and filtering options for the currently bound texture object
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, FILTER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, FILTER);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// load the image
int width, height;
unsigned char* image = SOIL_load_image(fileName.c_str(), &width, &height, 0, SOIL_LOAD_RGB);
// generate the texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glGenerateMipmap(GL_TEXTURE_2D);
// free the image memory and unbind the texture object
SOIL_free_image_data(image);
glBindTexture(GL_TEXTURE_2D, 0);
return texture;
}
void vertexObjectInit() {
// draw a cube (36 vertices, two triangles per face) without an index buffer object
// set up the vertex buffer and attribute pointers
// NOTE: positions and texture coordinates follow the standard LearnOpenGL cube;
// the original color values were lost in transcription, so white is assumed here
GLfloat vertices[] = {
-0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
-0.5f, -0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
0.5f, -0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
-0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
-0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
-0.5f, 0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
-0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
0.5f, -0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
-0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
0.5f, -0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
0.5f, -0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
-0.5f, -0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
-0.5f, -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f,
-0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
-0.5f, 0.5f, -0.5f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f
};
// create the vertex buffer and vertex array objects
glGenBuffers(1, &VBO);
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
// copy the vertex array into a buffer for OpenGL to use
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// position attribute
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
// color attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);
// texture coordinate attribute
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, 0); // glVertexAttribPointer registered VBO as the currently bound vertex buffer object, so we can safely unbind now
glBindVertexArray(0); // unbind the VAO (it is usually good practice to unbind any buffer/array to prevent strange bugs)
}
int main()
{
// initialize GLFW
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
// create the window object
GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "LearnOpenGL", nullptr, nullptr);
if (window == nullptr)
{
std::cout << "Failed to create GLFW window" << std::endl;
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// register the keyboard callback
glfwSetKeyCallback(window, key_callback);
// register the mouse callback
glfwSetCursorPosCallback(window, mouse_callback);
// register the mouse scroll callback
glfwSetScrollCallback(window, scroll_callback);
// hide and capture the cursor
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
// initialize GLEW
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK)
{
std::cout << "Failed to initialize GLEW" << std::endl;
return -1;
}
// tell OpenGL the size of the rendering window
int width, height;
glfwGetFramebufferSize(window, &width, &height);
glViewport(0, 0, width, height);
glEnable(GL_DEPTH_TEST);
// initialize and bind the shaders
shaderInit();
// initialize the textures
textureInit();
// initialize the vertex object data
vertexObjectInit();
mainCamera = Camera();
// keep the window open and accepting input
while (!glfwWindowShouldClose(window))
{
GLfloat currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
// check events
glfwPollEvents();
do_movement();
// rendering commands
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// compute time-based x/y offsets; the resulting motion is circular
GLfloat timeValue = glfwGetTime();
GLfloat offsetx = (sin(timeValue) / 2) + 0.5;
GLfloat offsety = (cos(timeValue) / 2) + 0.5;
// draw the cubes
shader.Use();
// bind the two textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texContainer);
glUniform1i(glGetUniformLocation(shader.Program, "ourTexture1"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texAwesomeface);
glUniform1i(glGetUniformLocation(shader.Program, "ourTexture2"), 1);
// update uniform values
// set the motion path
//GLint vertexorangeLocation = glGetUniformLocation(shader.Program, "offset");
//glUniform4f(vertexorangeLocation, offsetx, offsety, 0.0f, 1.0f);
// set the texture mix ratio
GLint mixPar = glGetUniformLocation(shader.Program, "mixPar");
glUniform1f(mixPar, key_UD);
glm::mat4 model;
model = glm::rotate(model, (GLfloat)glfwGetTime() * 50.0f, glm::vec3(0.5f, 1.0f, 0.0f)); // rotation speed and axis assumed; lost in transcription
glm::mat4 view;
view = mainCamera.GetViewMatrix();
glm::mat4 projection;
projection = glm::perspective(mainCamera.Zoom * scrollSpeed, (GLfloat)WIDTH / (GLfloat)HEIGHT, 0.1f, 100.0f);
GLint modelLoc = glGetUniformLocation(shader.Program, "model");
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
GLint viewLoc = glGetUniformLocation(shader.Program, "view");
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
GLint projectionLoc = glGetUniformLocation(shader.Program, "projection");
glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, glm::value_ptr(projection));
glm::vec3 cubePositions[] = {
glm::vec3(0.0f, 0.0f, 0.0f),
glm::vec3(2.0f, 5.0f, -15.0f),
glm::vec3(-1.5f, -2.2f, -2.5f),
glm::vec3(-3.8f, -2.0f, -12.3f),
glm::vec3(2.4f, -0.4f, -3.5f),
glm::vec3(-1.7f, 3.0f, -7.5f),
glm::vec3(1.3f, -2.0f, -2.5f),
glm::vec3(1.5f, 2.0f, -2.5f),
glm::vec3(1.5f, 0.2f, -1.5f),
glm::vec3(-1.3f, 1.0f, -1.5f)
};
glBindVertexArray(VAO);
for (GLuint i = 0; i < 10; i++)
{
glm::mat4 model;
model = glm::translate(model, cubePositions[i]);
if (i < 5) { // rotate only some of the cubes (threshold assumed; lost in transcription)
GLfloat angle = (GLfloat)glfwGetTime() * 20.0f * i; // rotation speed assumed
model = glm::rotate(model, angle, glm::vec3(1.0f, 0.3f, 0.5f)); // rotation axis assumed
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
}
else
{
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
}
glDrawArrays(GL_TRIANGLES, 0, 36);
}
glBindVertexArray(0);
// swap the buffers
glfwSwapBuffers(window);
}
glDeleteVertexArrays(1, &VAO);
glDeleteBuffers(1, &VBO);
// release resources
glfwTerminate();
return 0;
}
}
void do_movement()
{
// camera controls (ProcessKeyboard scales movement by deltaTime internally)
if (keys[GLFW_KEY_W])
mainCamera.ProcessKeyboard(FORWARD, deltaTime);
if (keys[GLFW_KEY_S])
mainCamera.ProcessKeyboard(BACKWARD, deltaTime);
if (keys[GLFW_KEY_A])
mainCamera.ProcessKeyboard(LEFT, deltaTime);
if (keys[GLFW_KEY_D])
mainCamera.ProcessKeyboard(RIGHT, deltaTime);
}
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
// when the user presses the ESC key, set the window's WindowShouldClose property to true,
// closing the application
if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
if (key == GLFW_KEY_UP && action == GLFW_PRESS) // UP increases the mix ratio
key_UD = key_UD + 0.1f; // step size assumed; lost in transcription
if (key == GLFW_KEY_DOWN && action == GLFW_PRESS) // DOWN decreases the mix ratio
key_UD = key_UD - 0.1f;
if (action == GLFW_PRESS)
keys[key] = true;
else if (action == GLFW_RELEASE)
keys[key] = false;
}
}
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
if (firstMouse)
{
lastX = xpos;
lastY = ypos;
firstMouse = false;
}
GLfloat xoffset = xpos - lastX;
GLfloat yoffset = lastY - ypos; // Reversed since y-coordinates go from bottom to top
lastX = xpos;
lastY = ypos;
mainCamera.ProcessMouseMovement(xoffset, yoffset);
}
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
cout << yoffset << endl;
mainCamera.ProcessMouseScroll(yoffset);
}
Result:
WASD moves the camera, the mouse controls the view direction, and the scroll wheel zooms in and out: